---
Adjutant official project status
Adjutant is a service built to help manage certain elements of operations
processes by providing micro APIs around complex underlying workflows. It
started life as a system to manage sign-ups to a public cloud, and grew
into a more generic and flexible system into which deployers can add
useful APIs for their users to consume.
The project history can be found in the docs[1], along with guidelines
which explain what the scope of the project is and how we manage its
intentional vagueness[2]. There is also a section covering what is built
into Adjutant[3].
The project was built from the beginning to integrate with OpenStack:
it uses KeystoneMiddleware for auth, will be moving to oslo.policy in the
near future, and we hope to switch to using the OpenStackSDK for all
OpenStack API interactions.
Adjutant easily meets all of the new project requirements. The source
is Apache License 2.0, and all libraries used are open source. While the
project was started internally at Catalyst, we use Launchpad for bug and
blueprint tracking, and all of us are developers already working
upstream with OpenStack. Code review and gated testing are done on the
OpenStack infrastructure, and the goal is always to support features in
other projects first (see the guide-lines doc[2]). We want to build
Adjutant to be useful to the community and have taken care to keep
company-specific logic out of the service by providing plugin mechanisms.
Catalyst Cloud[4] is a company dedicated to open source, and we always
prefer working with others to achieve a goal that works for everyone
rather than writing internal-only solutions. Almost all our work is
open source where applicable, and we are active in the community.
Catalyst Cloud runs Adjutant in production (very close to master), and we
know of at least one other company using it for password resets, as well
as interest from a few more. For Catalyst Cloud it handles our full
automated sign-up process, and we run almost all of the default Tasks
defined in the core codebase for user management.
[1]: https://adjutant.readthedocs.io/en/latest/history.html
[2]: https://adjutant.readthedocs.io/en/latest/guide-lines.html
[3]: https://adjutant.readthedocs.io/en/latest/features.html
[4]: https://catalystcloud.nz/
Change-Id: I0d119fa26b7ed8969870ad0c3f405e0ac3df98e3

adjutant:
  ptl:
    name: Dale Smith
    irc: dalees
    email: dale@catalystcloud.nz
  appointed:
    - victoria
    - yoga
    - zed
    - '2023.1'
  irc-channel: openstack-adjutant
  service: Operations Processes automation
  mission: >
    To provide an extensible API framework for exposing to users an
    organization's automated business processes relating to account management
    across OpenStack and external systems, that can be adapted to the unique
    requirements of an organization's processes.
  url: http://adjutant.readthedocs.io/
  deliverables:
    adjutant:
      repos:
        - openstack/adjutant
    adjutant-ui:
      repos:
        - openstack/adjutant-ui
    python-adjutantclient:
      repos:
        - openstack/python-adjutantclient
barbican:
  ptl:
    name: Grzegorz Grasza
    irc: xek
    email: xek@redhat.com
  appointed:
    - victoria
    - xena
  irc-channel: openstack-barbican
  service: Key Manager service
  mission: >
    To produce a secret storage and generation system capable of providing key
    management for services wishing to enable encryption features.
  url: https://wiki.openstack.org/wiki/Barbican
  deliverables:
    barbican:
      repos:
        - openstack/barbican
    ansible-role-atos-hsm:
      repos:
        - openstack/ansible-role-atos-hsm
    ansible-role-lunasa-hsm:
      repos:
        - openstack/ansible-role-lunasa-hsm
    ansible-role-thales-hsm:
      repos:
        - openstack/ansible-role-thales-hsm
    barbican-specs:
      release-management: none
      repos:
        - openstack/barbican-specs
    barbican-tempest-plugin:
      repos:
        - openstack/barbican-tempest-plugin
    barbican-ui:
      repos:
        - openstack/barbican-ui
    python-barbicanclient:
      repos:
        - openstack/python-barbicanclient

Blazar - Resource Reservation Service
The mission of the Blazar project[1] is to provide resource reservations
in OpenStack clouds for virtual resources and physical resources.
Blazar ensures cloud users can deploy their resources during specific time
frames, using reservations. Blazar creates reservations when it receives
requests from cloud users about future usage of resources. It finds
out whether the cloud can accommodate the usage requests in the requested
time frame. A request can define reservations of various kinds of
resources, such as virtual resources (Nova instances or Cinder volumes)
as well as physical resources (Nova hypervisors or Cinder storage).
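
As a purely illustrative aside (not code from the Blazar repository), the
admission decision described above boils down to checking whether the
already-accepted reservations leave enough capacity at every instant of the
requested window. A minimal Python sketch of that check, with resource types,
host matching and database state all omitted:

    from datetime import datetime

    def fits(existing, capacity, start, end, amount):
        """Return True if `amount` units can be reserved over [start, end).

        `existing` is a list of (start, end, amount) tuples for reservations
        already accepted against the same pool of `capacity` units.
        """
        # Usage only changes when some reservation begins, so it is enough to
        # inspect the requested start plus every reservation start that falls
        # inside the requested window.
        points = [start] + [s for s, e, a in existing if start < s < end]
        for t in points:
            used = sum(a for s, e, a in existing if s <= t < e)
            if used + amount > capacity:
                return False
        return True

    # 10 hosts total; 6 are reserved 09:00-12:00, so a request for 5 hosts
    # from 11:00 to 13:00 cannot be accommodated.
    existing = [(datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 12), 6)]
    print(fits(existing, 10,
               datetime(2024, 1, 1, 11), datetime(2024, 1, 1, 13), 5))  # False
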
The features of Blazar are of interest to various kinds of OpenStack cloud
users, such as the Scientific WG, enterprises, and OPNFV Promise. The active
team members come from each of these areas, and the development cycle is
neutral across all of them. Additionally, Blazar aims to solve some problems
described in the user story[2] defined by the Product WG.
The development cycle of the team follows OpenStack patterns.
The team thinks operator feedback is very important for improving the Blazar
project and its community, so the priorities for each cycle have been
decided based not only on development requests but also on operator feedback.
The Blazar code was not implemented from scratch by the current active
members. The Blazar project was founded in the Icehouse cycle, but was
inactive between Kilo and Newton. At the Barcelona Summit, the current
members, some of whom are real users of Blazar in production and others
NFV operators who need Blazar, gathered and started to revive the project.
The Blazar project and team follow the four opens:
* Open Source:
The Blazar source code[3], specifications[4] and documentation[5] are
completely open source. All are released under the Apache 2.0 license.
* Open Design:
The team works openly on all activities. For release milestones, the team
discussed the problems Blazar was facing, the work items the team was
targeting, and the project's priorities in etherpads[6][7]. Additionally,
the team held meetings at both the PTG[7] and the Boston Summit[8].
* Open Development:
All Blazar code is reviewed and hosted on the OpenStack CI[9]. The CI
runs unit tests and a Tempest scenario test. All blueprints and bugs
are tracked in Launchpad[10].
* Open Community:
All new designs and bugs are open in the Blazar Launchpad[10]. The team's
weekly IRC meeting[11] is held in an IRC channel for OpenStack projects,
and the #openstack-blazar channel is open to everyone who wants to discuss
Blazar-related topics.
1. https://wiki.openstack.org/wiki/Blazar
2. http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/capacity_management.html
3. https://github.com/openstack/blazar
4. https://github.com/openstack/blazar/tree/master/doc/source/devref
5. http://blazar.readthedocs.io/en/latest/
6. https://etherpad.openstack.org/p/Blazar_status_2016
7. https://etherpad.openstack.org/p/blazar-ptg-pike
8. https://etherpad.openstack.org/p/blazar-boston-summit
9. https://review.openstack.org/#/q/status:open+blazar
10. https://launchpad.net/blazar
11. http://eavesdrop.openstack.org/#Blazar_Team_Meeting
Change-Id: I7473dc15b553195dbbe2665ade9ecf6a8bc21839

blazar:
  ptl:
    name: Pierre Riteau
    irc: priteau
    email: pierre@stackhpc.com
  irc-channel: openstack-blazar
  service: Resource reservation service
  mission: >
    Blazar's goal is to provide resource reservations in
    OpenStack clouds for different resource types, both
    virtual (instances, volumes, etc) and physical (hosts,
    storage, etc.).
  url: https://wiki.openstack.org/wiki/Blazar
  deliverables:
    blazar:
      repos:
        - openstack/blazar
    blazar-dashboard:
      repos:
        - openstack/blazar-dashboard
    blazar-nova:
      repos:
        - openstack/blazar-nova
    blazar-specs:
      release-management: none
      repos:
        - openstack/blazar-specs
    blazar-tempest-plugin:
      repos:
        - openstack/blazar-tempest-plugin
    python-blazarclient:
      repos:
        - openstack/python-blazarclient
cinder:
  ptl:
    name: Jon Bernard
    irc: No nick supplied
    email: jobernar@redhat.com
  irc-channel: openstack-cinder
  service: Block Storage service
  mission: >
    To implement services and libraries to provide on-demand, self-service
    access to Block Storage resources via abstraction and automation on top of
    other block storage devices.
  url: https://wiki.openstack.org/wiki/Cinder
  deliverables:
    cinder:
      repos:
        - openstack/cinder
    cinder-specs:
      release-management: none
      repos:
        - openstack/cinder-specs
    cinder-tempest-plugin:
      repos:
        - openstack/cinder-tempest-plugin
    cinderlib:
      deprecated: '2024.1'
      release-management: deprecated
      repos:
        - openstack/cinderlib
    os-brick:
      repos:
        - openstack/os-brick
    python-brick-cinderclient-ext:
      repos:
        - openstack/python-brick-cinderclient-ext
    python-cinderclient:
      repos:
        - openstack/python-cinderclient
    rbd-iscsi-client:
      repos:
        - openstack/rbd-iscsi-client

CloudKitty application for Big Tent
This is a request to add CloudKitty [1] to the Big Tent.
CloudKitty is a Rating-as-a-Service component aimed at easing the
integration of OpenStack with existing billing systems. It consumes data
from metric systems (Ceilometer, for example) and, with the help of rating
plugins and client-defined rules, generates prices for the different
services.
The project is fully licensed under the Apache 2.0 license, and has been
actively promoted during the last two cycles, including a presentation at
the Vancouver summit and another on-stage presentation.
Stéphane Albert is currently acting as PTL. He has been working on the
project since the first POC shown at the OpenStack Summit in Atlanta. We
plan on holding an official PTL election around the Tokyo summit.
All code changes are approved by the core members. CloudKitty code is
submitted and reviewed using the OpenStack infrastructure and gated
through Gerrit[2][3][4].
We are starting to use both the dev and operators mailing lists in order to
give the community a way to provide feedback and keep up with our progress
and intentions.
Meanwhile, we have been heavily using IRC for cooperation.
The #cloudkitty IRC channel on freenode is logged and the channel is
notified of such logging [5]. Community members, core developers and
users gather there to discuss the future of the project as well as
questions regarding usage.
We have recently booked a slot on #openstack-meeting-3 for our
regular meetings.
CloudKitty was initiated by a single company, but since then we have
received several patches and reviews from external contributors, both
from different companies and unaffiliated.
In fact, the whole design of CloudKitty has been improved and decided
during regular IRC meetings held by contributors from many companies
in order to reflect their needs and usages.
We are aware that CloudKitty has been used in production by several
cloud users (both public and private clouds) for several months.
[1]https://wiki.openstack.org/wiki/CloudKitty
[2]https://review.openstack.org/#/q/project:stackforge/cloudkitty,n,z
[3]https://review.openstack.org/#/q/project:stackforge/python-cloudkittyclient,n,z
[4]https://review.openstack.org/#/q/project:stackforge/cloudkitty-dashboard,n,z
[5]http://eavesdrop.openstack.org/irclogs/%23cloudkitty/
Change-Id: I1864953db026832e8c40707e212db0ef179fe480

cloudkitty:
  ptl:
    name: Rafael Weingartner
    irc: No nick supplied
    email: rafael@apache.org
  appointed:
    - victoria
    - wallaby
  irc-channel: cloudkitty
  service: Rating service
  mission: >
    CloudKitty is a rating component for OpenStack. Its goal is to process data
    from different metric backends and implement rating rule creation. Its role
    is to fit in-between the raw metrics from OpenStack and the billing system
    of a provider for chargeback purposes.
  url: https://wiki.openstack.org/wiki/CloudKitty
  deliverables:
    cloudkitty:
      repos:
        - openstack/cloudkitty
    python-cloudkittyclient:
      repos:
        - openstack/python-cloudkittyclient
    cloudkitty-dashboard:
      repos:
        - openstack/cloudkitty-dashboard
    cloudkitty-specs:
      release-management: none
      repos:
        - openstack/cloudkitty-specs
    cloudkitty-tempest-plugin:
      repos:
        - openstack/cloudkitty-tempest-plugin

Cyborg Application For Official Projects
First of all, thanks to the TC for the audience at our project update at
the PTG. The overview slide presented at the meeting can be found here[0].
**What is Cyborg**
Cyborg is a new project that aims to provide a general management
framework for accelerators[1], such as FPGAs, GPUs, ARM SoCs, NVMe SSDs,
DPDK/SPDK, eBPF/XDP and so forth; basically any type of dedicated
hardware/software designed specifically to accelerate the performance of
the general computing infrastructure.
**Motivation**
The motivation behind the Cyborg project originally came from NFV in telco,
and later on more from the Scientific WG on HPC and from public cloud
providers who want to offer FPGA/GPU instances. Details of the requirements
for Cyborg can be found in the slide[0].
**Two Important Facts**
- Cyborg was started from scratch within the community, which means there
was no code dump from a vendor, and we wrote every spec and code
implementation from zero in an OpenStack way.
- Every decision in the Cyborg project is made through election/voting;
even the project name was voted on by team members.
**One More Fun Fact**
* We have the LONGEST team meeting in OpenStack (the meeting starts late in
the China timezone and is often left open because the organizer has fallen
asleep at the desk; we are working hard to improve on that :) )
**Functionality**
Cyborg's two main jobs are resource discovery and life cycle management,
which means Cyborg leaves scheduling entirely to Nova when the user needs a
virtual machine deployment, or to Kubernetes/Mesos/Docker if the user needs
a container solution running on bare metal.
**Cross Projects Relationship**
As mentioned above, Cyborg interacts with Nova via Placement. Cyborg does
not reinvent any wheels or try to drop in to replace any current modules.
It relies heavily on working with the current OpenStack structure. Nova's
FPGA/GPU support is critical for Cyborg to be able to work for VM-related
deployments.
In short, after cyborg-agent performs resource discovery via the various
drivers it connects to, it reports the accelerator resource info to
Placement. Placement then provides an aggregate view of both general and
accelerator resources for the Nova scheduler. When Nova successfully spawns
an instance on a compute node with an accelerator, Nova notifies Cyborg and
cyborg-api notifies cyborg-db to change the resource status from
"available" to "in use", similar to the Nova-Cinder interaction for volumes.
Cyborg is in constant discussion with the Nova/Placement team, and we will
continue this in Queens and the following cycles to make sure we keep up an
awesome collaboration.
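
To make the reporting step above concrete, here is a rough, illustrative
Python sketch of how an agent could publish discovered accelerator inventory
to the Placement REST API. The endpoint, token, resource-provider UUID and
the CUSTOM_ACCELERATOR_FPGA resource class are placeholder assumptions for
the example, not values taken from the Cyborg code base:

    import requests

    PLACEMENT = "http://placement.example.com/placement"  # assumed endpoint
    TOKEN = "<keystone-token>"                             # assumed credential
    PROVIDER = "<resource-provider-uuid>"                  # assumed provider

    headers = {
        "X-Auth-Token": TOKEN,
        "OpenStack-API-Version": "placement 1.17",
        "Content-Type": "application/json",
    }

    # Replace the provider's inventory with the two FPGAs the agent discovered.
    payload = {
        "resource_provider_generation": 0,
        "inventories": {
            "CUSTOM_ACCELERATOR_FPGA": {
                "total": 2,
                "reserved": 0,
                "min_unit": 1,
                "max_unit": 1,
                "step_size": 1,
                "allocation_ratio": 1.0,
            }
        },
    }

    resp = requests.put(
        f"{PLACEMENT}/resource_providers/{PROVIDER}/inventories",
        json=payload,
        headers=headers,
    )
    resp.raise_for_status()
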
Cyborg also has a very good relationship with the Scientific WG and the
OPNFV DPACC project. We gathered inputs/requirements from these groups.
**Scenario/Usage**
To put it very simply, if you have just one type of accelerator, you could
just use what Nova provides for PCI passthrough. If you have more than one
type of accelerator, or you have one type of accelerator but want
fine-grained control, then you probably need Cyborg on the side to help you
manage them, working alongside Nova.
**Community Principle Compliance**
The Cyborg project follows the core principle of the "Four Opens" within
the OpenStack community:
* Open Source
This goes without saying: all the Cyborg source code can be found in the
OpenStack repo[2] or its GitHub mirror[3], including feature code,
documentation[4] and tests[5]. The Apache License 2.0 is applied as usual.
* Open Design
As mentioned above, Cyborg was designed from the ground up within the
community. We have weekly team meetings[6] in #openstack-cyborg and publicly
available/searchable meeting logs[7]. We have organized PTG design sessions
for each of the past PTGs[8][9]. We have also used the [acceleration] or
[cyborg] tags for mailing list discussion.
* Open Community
Cyborg has a truly diverse team composition with no single-vendor
dominance[10]. The PTL of Cyborg comes from Huawei and the current two core
reviewers come from Red Hat and Lenovo. We have also gone through
election/voting for these positions[11][12].
* Open Development
All of the Cyborg patches can be found here[13]. We have an implicit policy
that the core who lands a patch should come from a different company than
the one that submitted it. We try to review the patches as thoroughly as
possible, and we are honored to have people from other teams help with
reviews, for example[14][15]. The Cyborg team also tries to utilize all the
available community infrastructure, e.g. CI/CD[16].
The Cyborg project team has been doing everything according to the
community's traditions and best practices. Although it is far from perfect,
we believe it is on the right track. We also believe that by putting Cyborg
under official governance, it will show OpenStack's capability for
heterogeneous computing, which provides the foundation for AI, machine
learning, HPC, NFV and many other cutting-edge technologies.
0. https://docs.google.com/presentation/d/1RyDDVMBsQndN-Qo_JInnHaj_oCY6zwT_3ky_gk1nJMo/edit?usp=sharing
1. https://wiki.openstack.org/wiki/Cyborg
2. https://review.openstack.org/gitweb?p=openstack/cyborg.git;a=summary
3. https://github.com/openstack/cyborg
4. https://github.com/openstack/cyborg/tree/master/doc/source
5. https://github.com/openstack/cyborg/tree/master/cyborg/tests
6. https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting
7. https://wiki.openstack.org/wiki/Cyborg/MeetingLogs
8. https://etherpad.openstack.org/p/cyborg-ptg-pike
9. https://etherpad.openstack.org/p/cyborg-queens-ptg
10. http://stackalytics.com/?project_type=openstack-others&module=cyborg&metric=person-day&release=all
11. https://openstack.nimeyo.com/113991/openstack-cyborg-nominate-rushil-justin-kilpatrick-reviewers
12. https://openstack.nimeyo.com/117806/openstack-dev-cyborg-queens-ptl-candidacy?show=117812
13. https://review.openstack.org/#/q/project:openstack/cyborg
14. https://review.openstack.org/#/c/448228/
15. https://review.openstack.org/#/c/445814/
16. https://review.openstack.org/#/c/504141/
Change-Id: If0679c80f72f48cbe5fbc50c0a182661e97793fe
Signed-off-by: zhipengh <huangzhipeng@huawei.com>

cyborg:
  ptl:
    name: alex song
    irc: songwenping
    email: songwenping@inspur.com
  appointed:
    - ussuri
    - xena
    - '2024.1'
  irc-channel: openstack-cyborg
  service: Accelerator Life Cycle Management
  mission: >
    To provide a general management framework for accelerators (FPGA,GPU,SoC,
    NVMe SSD,DPDK/SPDK,eBPF/XDP ...)
  url: https://wiki.openstack.org/wiki/Cyborg
  deliverables:
    cyborg:
      repos:
        - openstack/cyborg
    cyborg-specs:
      release-management: none
      repos:
        - openstack/cyborg-specs
    python-cyborgclient:
      repos:
        - openstack/python-cyborgclient
    cyborg-tempest-plugin:
      repos:
        - openstack/cyborg-tempest-plugin
designate:
  ptl:
    name: Michael Johnson
    irc: johnsom
    email: johnsomor@gmail.com
  appointed:
    - ussuri
  service: DNS service
  irc-channel: openstack-dns
  mission: >
    To provide scalable, on demand, self service access to authoritative DNS
    services, in technology-agnostic manner.
  url: https://wiki.openstack.org/wiki/Designate
  deliverables:
    designate:
      repos:
        - openstack/designate
    designate-dashboard:
      repos:
        - openstack/designate-dashboard
    designate-specs:
      release-management: none
      repos:
        - openstack/designate-specs
    designate-tempest-plugin:
      repos:
        - openstack/designate-tempest-plugin
    python-designateclient:
      repos:
        - openstack/python-designateclient
freezer:
  leadership_type: distributed
  liaisons:
    release:
      - name: Dmitriy Rabotyagov
        irc: noonedeadpunk
        email: noonedeadpunk@gmail.com
    tact-sig:
      - name: Dmitriy Rabotyagov
        irc: noonedeadpunk
        email: noonedeadpunk@gmail.com
    security:
      - name: Alvaro Soto
        irc: khyr0n
        email: alsotoes@gmail.com
    tc-liaison:
      - name: Ghanshyam Mann
        irc: gmann
        email: gmann@ghanshyammann.com
      - name: Dmitriy Rabotyagov
        irc: noonedeadpunk
        email: noonedeadpunk@gmail.com
  appointed:
    - stein
    - zed
  irc-channel: openstack-freezer
  service: Backup, Restore, and Disaster Recovery service
  mission: >
    To provide integrated tools for backing up and restoring cloud data in
    multiple use cases, including disaster recovery. These resources include
    file systems, server instances, volumes, and databases.
  url: https://wiki.openstack.org/wiki/Freezer
  deliverables:
    freezer:
      repos:
        - openstack/freezer
        - openstack/freezer-api
    freezer-specs:
      release-management: none
      repos:
        - openstack/freezer-specs
    freezer-tempest-plugin:
      repos:
        - openstack/freezer-tempest-plugin
    freezer-web-ui:
      repos:
        - openstack/freezer-web-ui
    python-freezerclient:
      repos:
        - openstack/python-freezerclient
glance:
  ptl:
    name: Pranali Deore
    irc: pdeore
    email: pdeore@redhat.com
  irc-channel: openstack-glance
  service: Image service
  mission: >
    To provide services and associated libraries to store, browse, share,
    distribute and manage bootable disk images, other data closely associated
    with initializing compute resources, and metadata definitions.
  url: https://wiki.openstack.org/wiki/Glance
  deliverables:
    glance:
      repos:
        - openstack/glance
    glance-specs:
      release-management: none
      repos:
        - openstack/glance-specs
    glance-tempest-plugin:
      repos:
        - openstack/glance-tempest-plugin
    glance-store:
      repos:
        - openstack/glance_store
    os-test-images:
      repos:
        - openstack/os-test-images
    python-glanceclient:
      repos:
        - openstack/python-glanceclient
heat:
  ptl:
    name: Takashi Kajinami
    irc: tkajinam
    email: kajinamit@oss.nttdata.com
  appointed:
    - yoga
    - zed
  irc-channel: heat
  service: Orchestration service
  mission: >
    To orchestrate composite cloud applications using a declarative
    template format through an OpenStack-native REST API.
  url: https://wiki.openstack.org/wiki/Heat
  deliverables:
    heat:
      repos:
        - openstack/heat
    heat-agents:
      repos:
        - openstack/heat-agents
    heat-cfntools:
      repos:
        - openstack/heat-cfntools
    heat-dashboard:
      repos:
        - openstack/heat-dashboard
    heat-specs:
      release-management: none
      repos:
        - openstack/heat-specs
    heat-tempest-plugin:
      repos:
        - openstack/heat-tempest-plugin
    heat-templates:
      release-management: none
      repos:
        - openstack/heat-templates
    python-heatclient:
      repos:
        - openstack/python-heatclient
    os-apply-config:
      repos:
        - openstack/os-apply-config
    os-collect-config:
      repos:
        - openstack/os-collect-config
    os-refresh-config:
      repos:
        - openstack/os-refresh-config
    yaql:
      repos:
        - openstack/yaql
horizon:
  ptl:
    name: Tatiana Ovchinnikova
    irc: tmazur
    email: t.v.ovtchinnikova@gmail.com
  irc-channel: openstack-horizon
  service: Dashboard
  mission: >
    To provide an extensible unified web based user interface for all
    OpenStack services.
  url: https://wiki.openstack.org/wiki/Horizon
  deliverables:
    horizon:
      repos:
        - openstack/horizon
    ui-cookiecutter:
      release-management: none
      repos:
        - openstack/ui-cookiecutter
    xstatic-angular:
      repos:
        - openstack/xstatic-angular
    xstatic-angular-bootstrap:
      repos:
        - openstack/xstatic-angular-bootstrap
    xstatic-angular-fileupload:
      repos:
        - openstack/xstatic-angular-fileupload
    xstatic-angular-gettext:
      repos:
        - openstack/xstatic-angular-gettext
    xstatic-angular-lrdragndrop:
      repos:
        - openstack/xstatic-angular-lrdragndrop
    xstatic-angular-material:
      repos:
        - openstack/xstatic-angular-material
    xstatic-angular-notify:
      repos:
        - openstack/xstatic-angular-notify
    xstatic-angular-smart-table:
      repos:
        - openstack/xstatic-angular-smart-table
    xstatic-angular-uuid:
      repos:
        - openstack/xstatic-angular-uuid
    xstatic-angular-vis:
      repos:
        - openstack/xstatic-angular-vis
    xstatic-bootstrap-datepicker:
      repos:
        - openstack/xstatic-bootstrap-datepicker
    xstatic-bootstrap-scss:
      repos:
        - openstack/xstatic-bootstrap-scss
    xstatic-bootswatch:
      repos:
        - openstack/xstatic-bootswatch
    xstatic-d3:
      repos:
        - openstack/xstatic-d3
    xstatic-hogan:
      repos:
        - openstack/xstatic-hogan
    xstatic-filesaver:
      repos:
        - openstack/xstatic-filesaver
    xstatic-jasmine:
      repos:
        - openstack/xstatic-jasmine
    xstatic-jquery-migrate:
      repos:
        - openstack/xstatic-jquery-migrate
    xstatic-jquery.quicksearch:
      repos:
        - openstack/xstatic-jquery.quicksearch
    xstatic-jquery.tablesorter:
      repos:
        - openstack/xstatic-jquery.tablesorter
    xstatic-js-yaml:
      repos:
        - openstack/xstatic-js-yaml
    xstatic-jsencrypt:
      repos:
        - openstack/xstatic-jsencrypt
    xstatic-json2yaml:
      repos:
        - openstack/xstatic-json2yaml
    xstatic-magic-search:
      repos:
        - openstack/xstatic-magic-search
    xstatic-mdi:
      repos:
        - openstack/xstatic-mdi
    xstatic-rickshaw:
      repos:
        - openstack/xstatic-rickshaw
    xstatic-roboto-fontface:
      repos:
        - openstack/xstatic-roboto-fontface
    xstatic-spin:
      repos:
        - openstack/xstatic-spin
2015-09-02 10:47:14 -05:00
|
|
|
ironic:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2024-03-22 09:00:21 +09:00
|
|
|
name: Riccardo Pittau
|
|
|
|
irc: rpittau
|
|
|
|
email: elfosardo@gmail.com
|
2015-07-28 12:11:57 +02:00
|
|
|
irc-channel: openstack-ironic
|
2015-07-14 12:15:21 -05:00
|
|
|
service: Bare Metal service
|
2014-07-23 10:40:47 -05:00
|
|
|
mission: >
|
2015-07-28 12:11:57 +02:00
|
|
|
To produce an OpenStack service and associated libraries capable of
|
|
|
|
managing and provisioning physical machines, and to do this in a
|
|
|
|
security-aware and fault-tolerant manner.
|
|
|
|
url: https://wiki.openstack.org/wiki/Ironic
|
2015-07-16 15:57:12 +02:00
|
|
|
deliverables:
|
2015-07-28 12:11:57 +02:00
|
|
|
bifrost:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/bifrost
|
|
|
|
ironic:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/ironic
|
|
|
|
ironic-inspector:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/ironic-inspector
|
2015-11-24 16:02:05 +01:00
|
|
|
ironic-inspector-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-11-24 16:02:05 +01:00
|
|
|
repos:
|
|
|
|
- openstack/ironic-inspector-specs
|
2015-07-28 12:11:57 +02:00
|
|
|
ironic-lib:
|
|
|
|
repos:
|
|
|
|
- openstack/ironic-lib
|
2019-06-14 16:55:02 +02:00
|
|
|
ironic-prometheus-exporter:
|
|
|
|
repos:
|
|
|
|
- openstack/ironic-prometheus-exporter
|
2015-07-28 12:11:57 +02:00
|
|
|
ironic-python-agent:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/ironic-python-agent
|
2017-06-23 14:44:37 +02:00
|
|
|
ironic-python-agent-builder:
|
|
|
|
repos:
|
|
|
|
- openstack/ironic-python-agent-builder
|
2015-07-28 12:11:57 +02:00
|
|
|
ironic-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/ironic-specs
|
2016-11-15 10:03:49 -05:00
|
|
|
ironic-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/ironic-tempest-plugin
|
2015-12-08 16:35:52 +00:00
|
|
|
ironic-ui:
|
|
|
|
repos:
|
|
|
|
- openstack/ironic-ui
|
2018-09-12 18:07:02 +02:00
|
|
|
metalsmith:
|
|
|
|
repos:
|
|
|
|
- openstack/metalsmith
|
2016-11-15 15:32:38 -05:00
|
|
|
molteniron:
|
2021-01-07 18:39:12 +01:00
|
|
|
release-management: none
|
2016-11-15 15:32:38 -05:00
|
|
|
repos:
|
|
|
|
- openstack/molteniron
|
2017-02-28 21:44:19 +02:00
|
|
|
networking-baremetal:
|
|
|
|
repos:
|
|
|
|
- openstack/networking-baremetal
|
2017-11-21 17:19:06 +02:00
|
|
|
networking-generic-switch:
|
|
|
|
repos:
|
|
|
|
- openstack/networking-generic-switch
|
2015-07-28 12:11:57 +02:00
|
|
|
python-ironic-inspector-client:
|
|
|
|
repos:
|
|
|
|
- openstack/python-ironic-inspector-client
|
|
|
|
python-ironicclient:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/python-ironicclient
|
2017-03-13 18:03:20 +00:00
|
|
|
sushy:
|
|
|
|
repos:
|
|
|
|
- openstack/sushy
|
2017-03-28 13:33:01 +01:00
|
|
|
sushy-tools:
|
|
|
|
repos:
|
|
|
|
- openstack/sushy-tools
|
2018-09-06 12:27:56 +01:00
|
|
|
tenks:
|
|
|
|
repos:
|
|
|
|
- openstack/tenks
|
2016-04-19 17:33:37 +01:00
|
|
|
virtualbmc:
|
|
|
|
repos:
|
|
|
|
- openstack/virtualbmc
|
2023-03-02 13:03:09 -08:00
|
|
|
virtualpdu:
|
|
|
|
repos:
|
2023-04-06 20:20:57 -05:00
|
|
|
- openstack/virtualpdu
|
2015-09-02 10:47:14 -05:00
|
|
|
keystone:
|
2022-02-14 20:34:57 +00:00
|
|
|
ptl:
|
2022-09-22 13:26:03 -05:00
|
|
|
name: Dave Wilde
|
|
|
|
irc: d34dh0r53
|
|
|
|
email: dwilde@redhat.com
|
|
|
|
appointed:
|
2023-03-10 18:14:20 -06:00
|
|
|
- '2023.1'
|
2015-07-28 12:11:57 +02:00
|
|
|
irc-channel: openstack-keystone
|
2015-07-14 12:15:21 -05:00
|
|
|
service: Identity service
|
2014-07-23 10:40:47 -05:00
|
|
|
mission: >
|
2015-07-28 12:11:57 +02:00
|
|
|
To facilitate API client authentication, service discovery, distributed
|
|
|
|
multi-tenant authorization, and auditing.
|
|
|
|
url: https://wiki.openstack.org/wiki/Keystone
|
2015-07-16 15:57:12 +02:00
|
|
|
deliverables:
|
2015-07-28 12:11:57 +02:00
|
|
|
keystone:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/keystone
|
|
|
|
keystone-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/keystone-specs
|
2017-05-29 22:09:43 +02:00
|
|
|
keystone-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/keystone-tempest-plugin
|
2015-07-28 12:11:57 +02:00
|
|
|
keystoneauth:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/keystoneauth
|
|
|
|
keystonemiddleware:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/keystonemiddleware
|
|
|
|
pycadf:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/pycadf
|
|
|
|
python-keystoneclient:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/python-keystoneclient
|
2017-05-25 19:31:35 +00:00
|
|
|
ldappool:
|
|
|
|
repos:
|
|
|
|
- openstack/ldappool
|
2015-09-02 10:47:14 -05:00
|
|
|
kolla:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2021-09-08 20:32:02 +09:00
|
|
|
name: Michal Nasiadka
|
|
|
|
irc: mnasiadka
|
|
|
|
email: mnasiadka@gmail.com
|
2016-04-08 16:26:22 +02:00
|
|
|
irc-channel: openstack-kolla
|
2019-11-20 12:14:56 +01:00
|
|
|
service: Containerised deployment of OpenStack
|
2015-07-28 14:37:28 -07:00
|
|
|
mission: >
|
|
|
|
To provide production-ready containers and deployment tools for operating
|
|
|
|
OpenStack clouds.
|
|
|
|
url: https://wiki.openstack.org/wiki/Kolla
|
|
|
|
deliverables:
|
2021-11-25 14:45:24 +00:00
|
|
|
ansible-collection-kolla:
|
|
|
|
repos:
|
|
|
|
- openstack/ansible-collection-kolla
|
2015-07-28 14:37:28 -07:00
|
|
|
kolla:
|
|
|
|
repos:
|
|
|
|
- openstack/kolla
|
2016-11-12 16:56:05 -07:00
|
|
|
kolla-ansible:
|
|
|
|
repos:
|
|
|
|
- openstack/kolla-ansible
|
2019-07-05 09:15:27 +01:00
|
|
|
kayobe:
|
|
|
|
repos:
|
2019-09-16 11:41:18 -04:00
|
|
|
- openstack/kayobe
|
|
|
|
- openstack/kayobe-config
|
|
|
|
- openstack/kayobe-config-dev
|
2015-09-02 10:47:14 -05:00
|
|
|
magnum:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2022-09-20 23:24:42 +09:00
|
|
|
name: Jake Yip
|
|
|
|
irc: jakeyip
|
2024-02-15 21:16:21 +11:00
|
|
|
email: jake.yip@ardc.edu.au
|
2022-02-27 18:28:55 -06:00
|
|
|
appointed:
|
|
|
|
- zed
|
2015-11-04 10:38:36 +01:00
|
|
|
irc-channel: openstack-containers
|
2016-04-30 00:21:22 +00:00
|
|
|
service: Container Infrastructure Management service
|
2015-07-28 12:11:57 +02:00
|
|
|
mission: >
|
2016-04-30 00:21:22 +00:00
|
|
|
To provide a set of services for provisioning, scaling, and managing
|
|
|
|
container orchestration engines.
|
2015-07-28 12:11:57 +02:00
|
|
|
url: https://wiki.openstack.org/wiki/Magnum
|
|
|
|
deliverables:
|
|
|
|
magnum:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/magnum
|
2024-02-26 16:57:20 +01:00
|
|
|
magnum-capi-helm:
|
|
|
|
repos:
|
|
|
|
- openstack/magnum-capi-helm
|
2023-08-30 12:48:58 +12:00
|
|
|
magnum-capi-helm-charts:
|
|
|
|
repos:
|
|
|
|
- openstack/magnum-capi-helm-charts
|
2016-11-19 10:57:23 -06:00
|
|
|
magnum-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2016-11-19 10:57:23 -06:00
|
|
|
repos:
|
|
|
|
- openstack/magnum-specs
|
2017-09-06 16:32:16 +05:30
|
|
|
magnum-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/magnum-tempest-plugin
|
2015-07-28 12:11:57 +02:00
|
|
|
magnum-ui:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/magnum-ui
|
|
|
|
python-magnumclient:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/python-magnumclient
|
2015-09-02 10:47:14 -05:00
|
|
|
manila:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2022-02-24 16:35:09 -06:00
|
|
|
name: Carlos Silva
|
|
|
|
irc: carloss
|
|
|
|
email: ces.eduardo98@gmail.com
|
2015-07-28 12:11:57 +02:00
|
|
|
irc-channel: openstack-manila
|
2015-07-14 12:15:21 -05:00
|
|
|
service: Shared File Systems service
|
2015-07-28 12:11:57 +02:00
|
|
|
mission: >
|
|
|
|
To provide a set of services for management of shared file systems
|
|
|
|
in a multitenant cloud environment, similar to how OpenStack provides
|
|
|
|
for block-based storage management through the Cinder project.
|
|
|
|
url: https://wiki.openstack.org/wiki/Manila
|
|
|
|
deliverables:
|
|
|
|
manila:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/manila
|
|
|
|
manila-image-elements:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/manila-image-elements
|
2016-05-03 10:18:07 -04:00
|
|
|
manila-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2016-05-03 10:18:07 -04:00
|
|
|
repos:
|
|
|
|
- openstack/manila-specs
|
2017-09-27 16:41:40 +01:00
|
|
|
manila-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/manila-tempest-plugin
|
2016-11-03 12:48:18 -04:00
|
|
|
manila-test-image:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2016-11-03 12:48:18 -04:00
|
|
|
repos:
|
|
|
|
- openstack/manila-test-image
|
2016-06-06 08:48:55 -04:00
|
|
|
manila-ui:
|
|
|
|
repos:
|
|
|
|
- openstack/manila-ui
|
2015-07-28 12:11:57 +02:00
|
|
|
python-manilaclient:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/python-manilaclient
|
Masakari - Instances High Availability Service
Mission: Provide an instances high availability service for OpenStack
clouds by automatically recovering instances from failures. We focus
mainly on solving the problems addressed in the High Availability for
Virtual Machines Development Proposal [1].
Project History: Engineers from NTT started this project on GitHub [2]
in July 2015 under the Apache License 2.0. At the Newton design summit
in Austin, the Masakari team and the OpenStack HA team discussed how to
implement this feature in OpenStack [3]. As the Masakari team agreed to
re-architect masakari, we started a new project, openstack/masakari [4],
and re-architected and rebuilt [2] to OpenStack standards. We continued
to discuss this topic with the OpenStack HA team [5] and presented our
plan to move to a converged upstream solution for instance high
availability at the Boston Summit [6]. Masakari is an essential
component of this solution.
Satisfaction of new project requirements:
The Masakari projects follow the four opens:
* Open Source: All Masakari source code, specifications, and
documentation use the Apache License 2.0. All library dependencies
allow for unrestricted distribution and deployment.
* Open Community: We hold weekly IRC meetings in #openstack-meeting; the
weekly agenda, meeting time, and links to previous meeting logs are on
our wiki page [7]. We are also available in the #openstack-masakari IRC
channel for daily discussion. All of these channels are open for anyone
to interact with the Masakari team.
* Open Development: Masakari uses Gerrit for code review and Launchpad
for bug and blueprint tracking [8]. The project core team reviews all
changes before merging them. The project is integrated with the
OpenStack CI and runs pep8, py27, and py35 jobs. Masakari is
implemented using libraries and technologies adopted by other existing
OpenStack projects.
* Open Design: The Masakari team publicly discusses project direction,
issues, work items, and priorities on etherpads [9-11] and in the
weekly IRC meetings. We use masakari-specs [12] to discuss design
specifications for all Masakari projects. We also use the openstack-dev
mailing list with the [masakari] prefix for project discussion.
Masakari is compatible with OpenStack APIs and uses Keystone middleware
to integrate with OpenStack Identity.
The Masakari team is happy to participate in any goals specified by the
TC and to meet any policies that the TC requires all projects to meet.
[1] http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/ha_vm.html
[2] https://github.com/ntt-sic/masakari
[3] https://etherpad.openstack.org/p/newton-instance-ha
[4] https://github.com/openstack/masakari
[5] https://wiki.openstack.org/wiki/Meetings/HATeamMeeting
[6] https://www.openstack.org/videos/boston-2017/high-availability-for-instances-moving-to-a-converged-upstream-solution
[7] https://wiki.openstack.org/wiki/Meetings/Masakari
[8] https://wiki.openstack.org/wiki/Masakari#Code_and_Bug.2FFeature_trackers
[9] https://etherpad.openstack.org/p/ocata-priorities-masakari
[10] https://etherpad.openstack.org/p/masakari-pike-workitems
[11] https://etherpad.openstack.org/p/masakari-queens-workitems
[12] https://review.openstack.org/#/q/project:openstack/masakari-specs
Change-Id: I99f7bfbf785f79d235a51ca8acc8810c4805694c
2017-09-02 01:09:20 +09:00
|
|
|
masakari:
|
|
|
|
ptl:
|
2023-02-16 11:16:15 +11:00
|
|
|
name: sam sue
|
|
|
|
irc: No nick supplied
|
|
|
|
email: suzhengwei@inspur.com
|
2020-04-03 11:48:30 -05:00
|
|
|
appointed:
|
2022-01-13 08:18:01 +00:00
|
|
|
- yoga
|
2022-02-27 18:22:00 -06:00
|
|
|
- zed
|
2023-03-10 18:14:20 -06:00
|
|
|
- '2023.1'
|
2017-09-02 01:09:20 +09:00
|
|
|
irc-channel: openstack-masakari
|
2019-11-20 12:13:49 +01:00
|
|
|
service: Instances High Availability service
|
2017-09-02 01:09:20 +09:00
|
|
|
mission: >
|
|
|
|
Provide an instances high availability service for OpenStack
|
2017-09-20 12:42:27 -04:00
|
|
|
clouds by automatically recovering the instances from failures.
|
2017-09-02 01:09:20 +09:00
|
|
|
url: https://wiki.openstack.org/wiki/Masakari
|
|
|
|
deliverables:
|
|
|
|
masakari:
|
|
|
|
repos:
|
|
|
|
- openstack/masakari
|
|
|
|
masakari-monitors:
|
|
|
|
repos:
|
|
|
|
- openstack/masakari-monitors
|
|
|
|
masakari-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2017-09-02 01:09:20 +09:00
|
|
|
repos:
|
|
|
|
- openstack/masakari-specs
|
|
|
|
python-masakariclient:
|
|
|
|
repos:
|
|
|
|
- openstack/python-masakariclient
|
2017-10-31 13:32:48 +05:30
|
|
|
masakari-dashboard:
|
|
|
|
repos:
|
|
|
|
- openstack/masakari-dashboard
|
2015-09-02 10:47:14 -05:00
|
|
|
mistral:
|
2023-02-09 13:29:23 +00:00
|
|
|
ptl:
|
2024-09-04 13:24:13 +00:00
|
|
|
name: Axel Vanzaghi
|
|
|
|
irc: avanzaghi
|
|
|
|
email: avanzaghi.osf@axellink.fr
|
2023-12-05 09:59:07 +00:00
|
|
|
appointed:
|
|
|
|
- '2024.1'
|
2024-09-04 13:24:13 +00:00
|
|
|
- '2025.1'
|
2015-07-28 12:11:57 +02:00
|
|
|
irc-channel: openstack-mistral
|
|
|
|
service: Workflow service
|
|
|
|
mission: >
|
|
|
|
Provide a simple YAML-based language to write workflows (tasks and
|
|
|
|
transition rules) and a service that allows users to upload, modify, and run
|
|
|
|
them at scale and in a highly available manner, and to manage and monitor
|
|
|
|
workflow execution state and the state of individual tasks.
|
|
|
|
url: https://wiki.openstack.org/wiki/Mistral
|
|
|
|
deliverables:
|
|
|
|
mistral:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/mistral
|
2019-12-23 13:36:30 +09:00
|
|
|
mistral-dashboard:
|
|
|
|
repos:
|
|
|
|
- openstack/mistral-dashboard
|
2015-11-09 10:06:37 +08:00
|
|
|
mistral-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-11-09 10:06:37 +08:00
|
|
|
repos:
|
|
|
|
- openstack/mistral-specs
|
2017-12-02 18:27:23 +05:30
|
|
|
mistral-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/mistral-tempest-plugin
|
2015-07-28 12:11:57 +02:00
|
|
|
python-mistralclient:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/python-mistralclient
|
2017-04-07 13:10:28 +12:00
|
|
|
mistral-lib:
|
|
|
|
repos:
|
|
|
|
- openstack/mistral-lib
|
2020-02-21 11:37:11 +07:00
|
|
|
mistral-extra:
|
|
|
|
repos:
|
|
|
|
- openstack/mistral-extra
|
Adding Monasca to OpenStack
Monasca provides a multi-tenant, highly scalable, performant, fault-tolerant
monitoring-as-a-service solution for metrics. Metrics can be published to the
Monasca API, stored and queried. Alarms can be created and notifications, such
as email, can be sent when alarms transition state. Support for complex event
processing and logging is in progress. Monasca builds an extensible platform
for advanced monitoring services that can be used by both operators and tenants
to gain operational insight and visibility, ensuring availability and stability.
All code has been developed under an Apache 2.0 license and has no restrictions
on distribution or deployment. All Monasca code is submitted and reviewed
through OpenStack Gerrit [1]. The Monasca project maintains a core team that
approves all changes. Bugs are filed, reviewed and tracked in Launchpad [10].
Monasca integrates with several OpenStack projects and services. The Monasca
API uses Keystone for authentication and multi-tenancy. Oslo libraries are used
by all components where applicable. Keystone middleware is used by the Monasca
API. The Monasca project is in the process of integrating with Ceilometer by
using the Ceilometer data collection pipeline as well as the Ceilometer API via
a Ceilometer to Monasca storage driver, which will enable Monasca to consume
OpenStack notifications from other OpenStack services [5]. A monitoring panel
has been developed for Horizon. An integration with Heat for auto-scaling
support is under active development.
Monasca has been running weekly meetings from the start of the project.
Meetings are held on Tuesdays at 9:00 AM MST and are open to anyone who wants
to attend. Currently, the Monasca PTL is Roland Hochmuth. Regular elections
will be held to elect new PTLs and core members.
Monasca was initially discussed at the Atlanta Summit. The first Monasca
mid-cycle meetup was held in August 2014, which three companies attended. At
the Paris Summit a session on Monasca was held. In addition, at the Paris
Summit, there was a design summit session held to discuss areas for collaboration
between Ceilometer and Monasca. A Monasca Liberty mid-cycle meetup was held on
August 7-8, 2015, and included six companies [9]. Monasca is planning on
holding Monasca specific sessions at the Tokyo Summit as well as joint sessions
with other OpenStack projects. Monasca is interested in developing integrations
with Ceilometer, Heat, Mistral, Congress and others. There have been several
local meetups on Monasca in 2015, including Austin, TX, Boulder, CO, and San
Francisco, CA.
Monasca has an extensive set of documentation. Overall documentation and links
to documentation are at the Monasca Wiki [2]. The Monasca API is documented
[3]. The optional Monasca Agent is documented [4].
Monasca has several official deployment solutions available. Ansible roles are
available [6]. Puppet modules are available via the openstack organization
[7]. Monasca also has a turn-key development environment based on Vagrant,
Devstack and Ansible [8]. Monasca integrates with DevStack via a Monasca
plugin [11] for DevStack. Tempest tests for Monasca [12] are also available.
Monasca is continually deployed to test and production environments off of
master branch and maintains a very high level of quality. The first major
release of Monasca was tagged for Kilo. The second major release of Monasca
will be tagged for Liberty.
[1]: https://review.openstack.org/#/q/status:open+monasca,n,z
[2]: https://wiki.openstack.org/wiki/Monasca
[3]: http://git.openstack.org/cgit/openstack/monasca-api/tree/docs/monasca-api-spec.md
[4]: https://git.openstack.org/openstack/monasca-agent
[5]: https://git.openstack.org/openstack/monasca-ceilometer
[6]: https://github.com/search?utf8=%E2%9C%93&q=ansible-monasca
[7]: https://git.openstack.org/openstack/puppet-monasca
[8]: https://git.openstack.org/openstack/monasca-vagrant
[9]: https://etherpad.openstack.org/p/monasca_liberty_mid_cycle
[10]: https://bugs.launchpad.net/monasca
[11]: https://git.openstack.org/openstack/monasca-api/devstack
[12]: https://git.openstack.org/openstack/monasca-api/monasca_tempest_tests
Change-Id: I04eeb7651167ca2712f525af3f5b2b5d45dacb5f
2015-08-14 08:55:56 -06:00
|
|
|
monasca:
|
|
|
|
ptl:
|
2024-03-22 09:00:21 +09:00
|
|
|
name: Hasan Acar
|
2023-09-21 14:22:53 +10:00
|
|
|
irc: No nick supplied
|
2024-03-22 09:00:21 +09:00
|
|
|
email: hasan.acar@tubitak.gov.tr
|
2021-03-11 13:19:06 -06:00
|
|
|
appointed:
|
|
|
|
- xena
|
2021-09-08 09:47:56 -05:00
|
|
|
- yoga
|
2022-02-27 19:07:34 -06:00
|
|
|
- zed
|
2023-05-05 06:14:54 +00:00
|
|
|
- '2023.2'
|
2015-08-14 08:55:56 -06:00
|
|
|
irc-channel: openstack-monasca
|
|
|
|
service: Monitoring
|
|
|
|
mission: >
|
|
|
|
To provide a multi-tenant, highly scalable, performant, fault-tolerant
|
|
|
|
monitoring-as-a-service solution for metrics, complex event processing
|
|
|
|
and logging. To build an extensible platform for advanced monitoring
|
|
|
|
services that can be used by both operators and tenants to gain
|
|
|
|
operational insight and visibility, ensuring availability and stability.
|
|
|
|
url: https://wiki.openstack.org/wiki/Monasca
|
|
|
|
deliverables:
|
2016-02-01 13:58:26 -07:00
|
|
|
monasca-api:
|
2015-08-14 08:55:56 -06:00
|
|
|
repos:
|
|
|
|
- openstack/monasca-api
|
2016-02-01 13:58:26 -07:00
|
|
|
monasca-log-api:
|
2021-03-29 09:08:03 -05:00
|
|
|
deprecated: ussuri
|
|
|
|
release-management: deprecated
|
2016-02-01 13:58:26 -07:00
|
|
|
repos:
|
2015-12-03 09:10:56 +01:00
|
|
|
- openstack/monasca-log-api
|
2017-08-07 10:49:21 +02:00
|
|
|
monasca-events-api:
|
|
|
|
repos:
|
|
|
|
- openstack/monasca-events-api
|
2017-08-31 13:39:43 +02:00
|
|
|
monasca-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2017-08-31 13:39:43 +02:00
|
|
|
repos:
|
|
|
|
- openstack/monasca-specs
|
2016-02-01 13:58:26 -07:00
|
|
|
monasca-notification:
|
|
|
|
repos:
|
|
|
|
- openstack/monasca-notification
|
|
|
|
monasca-persister:
|
|
|
|
repos:
|
2015-08-14 08:55:56 -06:00
|
|
|
- openstack/monasca-persister
|
2017-12-09 19:10:29 +05:30
|
|
|
monasca-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/monasca-tempest-plugin
|
2016-02-01 13:58:26 -07:00
|
|
|
monasca-thresh:
|
|
|
|
repos:
|
2015-08-14 08:55:56 -06:00
|
|
|
- openstack/monasca-thresh
|
2016-02-01 13:58:26 -07:00
|
|
|
monasca-common:
|
|
|
|
repos:
|
Adding Monasca to OpenStack
Monasca provides a multi-tenant, highly scalable, performant, fault-tolerant
monitoring-as-a-service solution for metrics. Metrics can be published to the
Monasca API, stored and queried. Alarms can be created and notifications, such
as email, can be sent when alarms transition state. Support for complex event
processing and logging is in progress. Monasca builds an extensible platform
for advanced monitoring services that can be used by both operators and tenants
to gain operational insight and visibilty, ensuring availabilty and stability.
All code has been developed under an Apache 2.0 license and has no restrictions
on distribution or deployment. All Monasca code is submitted and reviewed
through OpenStack Gerrit [1]. The Monasca project maintains a core team that
approves all changes. Bugs are filed, reviewed and tracked in Launchpad [10].
Monasca integrates in with several OpenStack projects and services. The Monasca
API uses Keystone for authentication and multi-tenancy. Oslo libraries are used
by all components where applicable. Keystone middleware is used by the Monasca
API. The Monasca project is in the process of integrating with Ceilometer by
using the Ceilometer data collection pipeline as well as the Ceilometer API via
a Ceilometer to Monasca storage driver, which will enable Monasca to consume
OpenStack notifications from other OpenStack services [5]. A monitoring panel
has been developer for Horizon. An integration with Heat for auto-scaling
support is under active development.
Monasca has been running weekly meetings from the start of the project.
Meetings are held on Tuesday's at 9:00 AM MST and are open to anyone that wants
to attend. Currently, the Monasca PTL is Roland Hochmuth. Regular elections
will be held to elect new PTLs and core members.
Monasca was initially discussed at the Atlanta Summit. The first Monasca
mid-cycle meetup was held in August 2014 at which three companies attended. At
the Paris Summit a session on Monsca was held. In addition, at the Paris
Summit, there as a design summit session held to discuss areas for collaboration
between Ceilometer and Monasca. A Monasca Liberty mid-cycle meetup was held on
August 7-8, 2015, and included six companies [9]. Monasca is planning on
holding Monasca specific sessions at the Tokyo Summit as well as joint sessions
with other OpenStack projects. Monasca is interested in developing integrations
with Ceilometer, Heat, Mistral, Congress and others. There have been several
local meetups on Monasca in 2015, including Austin, TX, Boulder, CO, and San
Francisco, CA.
Monasca has an extensive set of documentation. Overall documentation and links
to documentation are at the Monasca Wiki [2]. The Monasca API is documented
[3]. The optional Monasca Agent is documented [4].
Monasca has several official deployment solutions available. Ansible roles are
available [6]. Puppet modules are available via the openstack organization
[7]. Monasca also has a turn-key development environment based on Vagrant,
DevStack and Ansible [8]. Monasca integrates with DevStack via a Monasca
plugin [11] for DevStack. Tempest tests for Monasca [12] are also available.
Monasca is continually deployed to test and production environments off of
the master branch and maintains a very high level of quality. The first major
release of Monasca was tagged for Kilo. The second major release of Monasca
will be tagged for Liberty.
[1]: https://review.openstack.org/#/q/status:open+monasca,n,z
[2]: https://wiki.openstack.org/wiki/Monasca
[3]: http://git.openstack.org/cgit/openstack/monasca-api/tree/docs/
monasca-api-spec.md
[4]: https://git.openstack.org/openstack/monasca-agent
[5]: https://git.openstack.org/openstack/monasca-ceilometer
[6]: https://github.com/search?utf8=%E2%9C%93&q=ansible-monasca
[7]: https://git.openstack.org/openstack/puppet-monasca
[8]: https://git.openstack.org/openstack/monasca-vagrant
[9]: https://etherpad.openstack.org/p/monasca_liberty_mid_cycle
[10]: https://bugs.launchpad.net/monasca
[11]: https://git.openstack.org/openstack/monasca-api/devstack
[12]: https://git.openstack.org/openstack/monasca-api/monasca_tempest_tests
Change-Id: I04eeb7651167ca2712f525af3f5b2b5d45dacb5f
2015-08-14 08:55:56 -06:00
|
|
|
- openstack/monasca-common
|
|
|
|
monasca-ui:
|
|
|
|
repos:
|
|
|
|
- openstack/monasca-ui
|
|
|
|
python-monascaclient:
|
|
|
|
repos:
|
|
|
|
- openstack/python-monascaclient
|
|
|
|
monasca-agent:
|
|
|
|
repos:
|
|
|
|
- openstack/monasca-agent
|
|
|
|
monasca-statsd:
|
|
|
|
repos:
|
|
|
|
- openstack/monasca-statsd
|
|
|
|
monasca-ceilometer:
|
2021-03-29 09:08:03 -05:00
|
|
|
deprecated: ussuri
|
|
|
|
release-management: deprecated
|
2015-08-14 08:55:56 -06:00
|
|
|
repos:
|
|
|
|
- openstack/monasca-ceilometer
|
2016-04-04 13:41:40 -06:00
|
|
|
monasca-transform:
|
2021-03-29 09:33:27 -05:00
|
|
|
deprecated: victoria
|
|
|
|
release-management: deprecated
|
2016-04-04 13:41:40 -06:00
|
|
|
repos:
|
|
|
|
- openstack/monasca-transform
|
2016-07-12 14:28:49 -06:00
|
|
|
monasca-grafana-datasource:
|
|
|
|
repos:
|
|
|
|
- openstack/monasca-grafana-datasource
|
2017-01-16 16:15:48 +01:00
|
|
|
monasca-kibana-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/monasca-kibana-plugin
|
2015-09-02 10:47:14 -05:00
|
|
|
neutron:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2023-09-21 14:22:53 +10:00
|
|
|
name: Brian Haley
|
|
|
|
irc: haleyb
|
|
|
|
email: haleyb.dev@gmail.com
|
2015-07-28 12:11:57 +02:00
|
|
|
irc-channel: openstack-neutron
|
2015-07-14 12:15:21 -05:00
|
|
|
service: Networking service
|
2014-07-23 10:40:47 -05:00
|
|
|
mission: >
|
2015-07-28 12:11:57 +02:00
|
|
|
To implement services and associated libraries to provide on-demand,
|
|
|
|
scalable, and technology-agnostic network abstraction.
|
|
|
|
url: https://wiki.openstack.org/wiki/Neutron
|
2015-07-16 15:57:12 +02:00
|
|
|
deliverables:
|
2015-11-12 11:17:29 -05:00
|
|
|
networking-bagpipe:
|
|
|
|
repos:
|
|
|
|
- openstack/networking-bagpipe
|
2015-07-28 12:11:57 +02:00
|
|
|
networking-bgpvpn:
|
|
|
|
repos:
|
|
|
|
- openstack/networking-bgpvpn
|
|
|
|
networking-midonet:
|
2021-03-30 08:58:09 +02:00
|
|
|
deprecated: wallaby
|
|
|
|
release-management: deprecated
|
2015-07-28 12:11:57 +02:00
|
|
|
repos:
|
|
|
|
- openstack/networking-midonet
|
|
|
|
networking-odl:
|
2024-09-19 23:17:25 +09:00
|
|
|
deprecated: '2023.2'
|
2023-05-23 20:32:19 +02:00
|
|
|
release-management: deprecated
|
2015-07-28 12:11:57 +02:00
|
|
|
repos:
|
|
|
|
- openstack/networking-odl
|
|
|
|
networking-sfc:
|
|
|
|
repos:
|
|
|
|
- openstack/networking-sfc
|
2016-05-31 11:16:36 -07:00
|
|
|
neutron-fwaas:
|
2015-07-28 12:11:57 +02:00
|
|
|
repos:
|
|
|
|
- openstack/neutron-fwaas
|
2016-05-31 11:16:36 -07:00
|
|
|
neutron:
|
|
|
|
repos:
|
|
|
|
- openstack/neutron
|
|
|
|
neutron-dynamic-routing:
|
|
|
|
repos:
|
|
|
|
- openstack/neutron-dynamic-routing
|
2015-10-30 10:00:37 +09:00
|
|
|
neutron-lib:
|
|
|
|
repos:
|
|
|
|
- openstack/neutron-lib
|
2015-07-28 12:11:57 +02:00
|
|
|
neutron-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-28 12:11:57 +02:00
|
|
|
repos:
|
|
|
|
- openstack/neutron-specs
|
2017-09-09 15:08:54 +05:30
|
|
|
neutron-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/neutron-tempest-plugin
|
2019-12-03 15:17:36 +01:00
|
|
|
ovn-octavia-provider:
|
|
|
|
repos:
|
|
|
|
- openstack/ovn-octavia-provider
|
2017-02-24 16:35:11 -05:00
|
|
|
ovsdbapp:
|
|
|
|
repos:
|
|
|
|
- openstack/ovsdbapp
|
2015-07-28 12:11:57 +02:00
|
|
|
python-neutronclient:
|
|
|
|
repos:
|
|
|
|
- openstack/python-neutronclient
|
2017-06-16 22:43:23 +09:00
|
|
|
neutron-fwaas-dashboard:
|
|
|
|
repos:
|
|
|
|
- openstack/neutron-fwaas-dashboard
|
2017-12-11 16:28:24 -06:00
|
|
|
neutron-vpnaas:
|
|
|
|
repos:
|
|
|
|
- openstack/neutron-vpnaas
|
|
|
|
neutron-vpnaas-dashboard:
|
|
|
|
repos:
|
|
|
|
- openstack/neutron-vpnaas-dashboard
|
2018-08-02 18:15:52 +00:00
|
|
|
os-ken:
|
|
|
|
repos:
|
|
|
|
- openstack/os-ken
|
2021-07-29 09:35:48 +02:00
|
|
|
tap-as-a-service:
|
|
|
|
repos:
|
|
|
|
- openstack/tap-as-a-service
|
2023-04-04 14:24:51 +01:00
|
|
|
ovn-bgp-agent:
|
|
|
|
repos:
|
|
|
|
- openstack/ovn-bgp-agent
|
2015-09-02 10:47:14 -05:00
|
|
|
nova:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2021-09-08 20:32:02 +09:00
|
|
|
name: Sylvain Bauza
|
|
|
|
irc: bauzas
|
|
|
|
email: sbauza@redhat.com
|
2015-07-28 12:11:57 +02:00
|
|
|
irc-channel: openstack-nova
|
2015-07-14 12:15:21 -05:00
|
|
|
service: Compute service
|
2015-07-28 12:11:57 +02:00
|
|
|
mission: >
|
|
|
|
To implement services and associated libraries to provide massively
|
|
|
|
scalable, on demand, self service access to compute resources, including
|
|
|
|
bare metal, virtual machines, and containers.
|
|
|
|
url: https://wiki.openstack.org/wiki/Nova
|
|
|
|
deliverables:
|
|
|
|
nova:
|
|
|
|
repos:
|
|
|
|
- openstack/nova
|
|
|
|
nova-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/nova-specs
|
|
|
|
python-novaclient:
|
|
|
|
repos:
|
|
|
|
- openstack/python-novaclient
|
2015-11-11 13:18:14 +00:00
|
|
|
os-vif:
|
|
|
|
repos:
|
|
|
|
- openstack/os-vif
|
2020-11-02 14:20:45 +01:00
|
|
|
placement:
|
|
|
|
repos:
|
|
|
|
- openstack/placement
|
|
|
|
os-traits:
|
|
|
|
repos:
|
|
|
|
- openstack/os-traits
|
|
|
|
osc-placement:
|
|
|
|
repos:
|
|
|
|
- openstack/osc-placement
|
|
|
|
os-resource-classes:
|
|
|
|
repos:
|
|
|
|
- openstack/os-resource-classes
|
2016-11-17 15:03:15 -07:00
|
|
|
octavia:
|
|
|
|
ptl:
|
2021-03-10 09:32:23 -08:00
|
|
|
name: Gregory Thiemonge
|
|
|
|
irc: gthiemonge
|
|
|
|
email: gthiemon@redhat.com
|
2020-10-14 15:25:49 +02:00
|
|
|
appointed:
|
|
|
|
- wallaby
|
2016-11-17 15:03:15 -07:00
|
|
|
irc-channel: openstack-lbaas
|
2017-06-28 16:32:11 -07:00
|
|
|
service: Load-balancer service
|
2016-11-17 15:03:15 -07:00
|
|
|
mission: >
|
|
|
|
To provide scalable, on demand, self service access to load-balancer
|
|
|
|
services, in a technology-agnostic manner.
|
|
|
|
url: https://wiki.openstack.org/wiki/Octavia
|
|
|
|
deliverables:
|
|
|
|
octavia:
|
|
|
|
repos:
|
|
|
|
- openstack/octavia
|
|
|
|
octavia-dashboard:
|
|
|
|
repos:
|
2017-03-01 11:30:51 -08:00
|
|
|
- openstack/octavia-dashboard
|
|
|
|
octavia-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/octavia-tempest-plugin
|
|
|
|
python-octaviaclient:
|
|
|
|
repos:
|
|
|
|
- openstack/python-octaviaclient
|
2018-09-24 12:34:18 -07:00
|
|
|
octavia-lib:
|
|
|
|
repos:
|
|
|
|
- openstack/octavia-lib
|
2015-09-17 16:30:34 +01:00
|
|
|
OpenStack Charms:
|
|
|
|
ptl:
|
2024-06-11 09:23:43 +01:00
|
|
|
name: James Page
|
|
|
|
irc: jamespage
|
|
|
|
email: james.page@canonical.com
|
2020-10-14 15:18:51 +02:00
|
|
|
appointed:
|
|
|
|
- wallaby
|
2022-02-27 18:44:11 -06:00
|
|
|
- zed
|
2023-03-10 18:14:20 -06:00
|
|
|
- '2023.1'
|
|
|
|
- '2023.2'
|
2024-06-11 09:23:43 +01:00
|
|
|
- '2024.2'
|
2016-10-11 14:22:51 +02:00
|
|
|
irc-channel: openstack-charms
|
2015-09-17 16:30:34 +01:00
|
|
|
service: Juju Charms for deployment of OpenStack
|
|
|
|
mission: >
|
|
|
|
Develop and maintain Juju Charms for deploying and managing
|
|
|
|
OpenStack services.
|
2017-08-22 19:08:18 +02:00
|
|
|
url: https://docs.openstack.org/charm-guide/latest/
|
2015-09-17 16:30:34 +01:00
|
|
|
deliverables:
|
2016-09-07 08:45:49 -07:00
|
|
|
charms.ceph:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-09-07 08:45:49 -07:00
|
|
|
repos:
|
|
|
|
- openstack/charms.ceph
|
2015-09-17 16:30:34 +01:00
|
|
|
charms.openstack:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charms.openstack
|
2016-07-13 11:47:28 +01:00
|
|
|
charm-aodh:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 11:47:28 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-aodh
|
2016-07-14 19:48:12 +00:00
|
|
|
charm-barbican:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-14 19:48:12 +00:00
|
|
|
repos:
|
|
|
|
- openstack/charm-barbican
|
|
|
|
charm-barbican-softhsm:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-14 19:48:12 +00:00
|
|
|
repos:
|
|
|
|
- openstack/charm-barbican-softhsm
|
2018-11-06 13:36:44 +01:00
|
|
|
charm-barbican-vault:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-barbican-vault
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-ceilometer:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-ceilometer
|
|
|
|
charm-ceilometer-agent:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-ceilometer-agent
|
2021-06-24 14:47:00 +01:00
|
|
|
charm-ceph-dashboard:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-ceph-dashboard
|
2020-08-03 15:10:58 +01:00
|
|
|
charm-ceph-iscsi:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-ceph-iscsi
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-ceph-mon:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-ceph-mon
|
2022-03-28 12:16:09 +02:00
|
|
|
charm-ceph-nfs:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-ceph-nfs
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-ceph-osd:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-ceph-osd
|
2016-10-24 14:57:31 -07:00
|
|
|
charm-ceph-fs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-10-24 14:57:31 -07:00
|
|
|
repos:
|
|
|
|
- openstack/charm-ceph-fs
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-ceph-radosgw:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-ceph-radosgw
|
2019-02-25 14:02:47 +03:00
|
|
|
charm-ceph-rbd-mirror:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-ceph-rbd-mirror
|
2017-03-03 11:41:04 -03:00
|
|
|
charm-ceph-proxy:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2017-03-03 11:41:04 -03:00
|
|
|
repos:
|
|
|
|
- openstack/charm-ceph-proxy
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-cinder:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder
|
|
|
|
charm-cinder-backup:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-backup
|
2019-11-29 16:25:20 +01:00
|
|
|
charm-cinder-backup-swift-proxy:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-backup-swift-proxy
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-cinder-ceph:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-ceph
|
2021-09-02 15:34:10 -03:00
|
|
|
charm-cinder-lvm:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-lvm
|
2021-09-14 17:50:41 -03:00
|
|
|
charm-cinder-netapp:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-netapp
|
2023-04-04 18:03:17 +00:00
|
|
|
charm-cinder-nfs:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-nfs
|
2022-01-13 10:35:08 -04:00
|
|
|
charm-cinder-nimblestorage:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-nimblestorage
|
2019-09-02 14:48:41 +01:00
|
|
|
charm-cinder-purestorage:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-purestorage
|
2022-01-07 10:03:32 -04:00
|
|
|
charm-cinder-solidfire:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-solidfire
|
2022-04-13 20:24:57 -03:00
|
|
|
charm-cinder-three-par:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-three-par
|
2022-06-21 13:50:09 +09:30
|
|
|
charm-cinder-dell-emc-powerstore:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-dell-emc-powerstore
|
2022-06-21 14:15:26 +09:30
|
|
|
charm-cinder-ibm-storwize-svc:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-ibm-storwize-svc
|
2022-11-08 04:59:23 +03:00
|
|
|
charm-cinder-infinidat:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-cinder-infinidat
|
2016-12-06 11:38:34 +01:00
|
|
|
charm-cloudkitty:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-12-06 11:38:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-cloudkitty
|
2017-07-13 15:31:10 +01:00
|
|
|
charm-deployment-guide:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2017-07-13 15:31:10 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-deployment-guide
|
2016-07-13 12:15:45 +01:00
|
|
|
charm-designate:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 12:15:45 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-designate
|
|
|
|
charm-designate-bind:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 12:15:45 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-designate-bind
|
2017-08-02 10:34:21 +01:00
|
|
|
charm-gnocchi:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2017-08-02 10:34:21 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-gnocchi
|
2019-10-02 18:54:16 +00:00
|
|
|
charm-placement:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-placement
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-glance:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-glance
|
2018-05-08 12:55:53 -05:00
|
|
|
charm-glance-simplestreams-sync:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2018-05-08 12:55:53 -05:00
|
|
|
repos:
|
|
|
|
- openstack/charm-glance-simplestreams-sync
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-guide:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-guide
|
|
|
|
charm-hacluster:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-hacluster
|
|
|
|
charm-heat:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-heat
|
2022-11-09 00:14:59 +03:00
|
|
|
charm-infinidat-tools:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-infinidat-tools
|
2018-11-06 13:36:44 +01:00
|
|
|
charm-interface-barbican-secrets:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-barbican-secrets
|
2018-04-24 15:49:20 -07:00
|
|
|
charm-interface-bgp:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2018-04-24 15:49:20 -07:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-bgp
|
2016-07-13 15:00:14 +01:00
|
|
|
charm-interface-bind-rndc:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-bind-rndc
|
2017-08-02 10:34:21 +01:00
|
|
|
charm-interface-ceph-client:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2017-08-02 10:34:21 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-ceph-client
|
2017-02-06 11:04:10 -08:00
|
|
|
charm-interface-ceph-mds:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2017-02-06 11:04:10 -08:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-ceph-mds
|
2019-02-25 14:02:47 +03:00
|
|
|
charm-interface-ceph-rbd-mirror:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-ceph-rbd-mirror
|
2019-01-16 15:28:01 +00:00
|
|
|
charm-interface-cinder-backend:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-cinder-backend
|
2019-11-29 16:25:20 +01:00
|
|
|
charm-interface-cinder-backup:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-cinder-backup
|
2018-11-06 13:36:44 +01:00
|
|
|
charm-interface-dashboard-plugin:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-dashboard-plugin
|
2017-11-02 15:52:51 +01:00
|
|
|
charm-interface-designate:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2017-11-02 15:52:51 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-designate
|
2017-08-02 10:34:21 +01:00
|
|
|
charm-interface-gnocchi:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2017-08-02 10:34:21 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-gnocchi
|
2016-07-13 15:00:14 +01:00
|
|
|
charm-interface-hacluster:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-hacluster
|
2020-09-24 13:37:08 +00:00
|
|
|
charm-interface-ironic-api:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-ironic-api
|
2016-07-13 15:00:14 +01:00
|
|
|
charm-interface-keystone:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-keystone
|
2017-01-16 13:15:01 +00:00
|
|
|
charm-interface-keystone-admin:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2017-01-16 13:15:01 +00:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-keystone-admin
|
2016-07-13 15:00:14 +01:00
|
|
|
charm-interface-keystone-credentials:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-keystone-credentials
|
2017-02-09 11:59:16 +00:00
|
|
|
charm-interface-keystone-domain-backend:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2017-02-09 11:59:16 +00:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-keystone-domain-backend
|
2019-04-30 16:39:32 -07:00
|
|
|
charm-interface-keystone-fid-service-provider:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-keystone-fid-service-provider
|
2020-01-21 17:41:41 -03:00
|
|
|
charm-interface-keystone-notifications:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-keystone-notifications
|
2020-11-16 10:57:50 +00:00
|
|
|
charm-interface-magpie:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-magpie
|
2016-10-18 10:23:23 +00:00
|
|
|
charm-interface-manila-plugin:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-10-18 10:23:23 +00:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-manila-plugin
|
2019-10-11 14:39:43 -07:00
|
|
|
charm-interface-mysql-innodb-cluster:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-mysql-innodb-cluster
|
|
|
|
charm-interface-mysql-router:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-mysql-router
|
2016-07-13 15:00:14 +01:00
|
|
|
charm-interface-mysql-shared:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-mysql-shared
|
2018-11-06 13:36:44 +01:00
|
|
|
charm-interface-neutron-load-balancer:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-neutron-load-balancer
|
2018-10-09 08:40:43 +01:00
|
|
|
charm-interface-nova-cell:
|
2018-11-28 13:17:36 +01:00
|
|
|
release-management: external
|
2018-10-09 08:40:43 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-nova-cell
|
|
|
|
charm-interface-nova-compute:
|
2018-11-28 13:17:36 +01:00
|
|
|
release-management: external
|
2018-10-09 08:40:43 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-nova-compute
|
2016-07-13 15:00:14 +01:00
|
|
|
charm-interface-neutron-plugin:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-neutron-plugin
|
2016-09-19 10:39:01 +00:00
|
|
|
charm-interface-neutron-plugin-api-subordinate:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-09-19 10:39:01 +00:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-neutron-plugin-api-subordinate
|
2016-07-13 15:00:14 +01:00
|
|
|
charm-interface-odl-controller-api:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-odl-controller-api
|
|
|
|
charm-interface-openstack-ha:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-openstack-ha
|
|
|
|
charm-interface-ovsdb-manager:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-ovsdb-manager
|
2019-04-03 13:44:10 +01:00
|
|
|
charm-interface-pacemaker-remote:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-pacemaker-remote
|
2019-10-02 18:54:16 +00:00
|
|
|
charm-interface-placement:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-placement
|
2016-07-13 15:00:14 +01:00
|
|
|
charm-interface-rabbitmq:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-rabbitmq
|
2016-08-31 11:37:38 +00:00
|
|
|
charm-interface-service-control:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-08-31 11:37:38 +00:00
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-service-control
|
2019-04-30 16:39:32 -07:00
|
|
|
charm-interface-websso-fid-service-provider:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-interface-websso-fid-service-provider
|
2018-01-07 11:53:32 -05:00
|
|
|
charm-ironic:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2018-01-07 11:53:32 -05:00
|
|
|
repos:
|
|
|
|
- openstack/charm-ironic
|
2020-09-24 13:37:08 +00:00
|
|
|
charm-ironic-api:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-ironic-api
|
|
|
|
charm-ironic-conductor:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-ironic-conductor
|
2023-03-02 16:13:12 -03:00
|
|
|
charm-ironic-dashboard:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-ironic-dashboard
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-keystone:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-keystone
|
2020-07-29 16:07:54 +02:00
|
|
|
charm-keystone-kerberos:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-keystone-kerberos
|
2017-02-09 11:59:16 +00:00
|
|
|
charm-keystone-ldap:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2017-02-09 11:59:16 +00:00
|
|
|
repos:
|
|
|
|
- openstack/charm-keystone-ldap
|
2022-09-13 13:31:38 -03:00
|
|
|
charm-keystone-openidc:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-keystone-openidc
|
2019-04-30 16:39:32 -07:00
|
|
|
charm-keystone-saml-mellon:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-keystone-saml-mellon
|
2019-02-25 14:02:47 +03:00
|
|
|
charm-layer-ceph:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-layer-ceph
|
2017-02-06 11:17:50 -08:00
|
|
|
charm-layer-ceph-base:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2017-02-06 11:17:50 -08:00
|
|
|
repos:
|
|
|
|
- openstack/charm-layer-ceph-base
|
2016-07-13 15:00:14 +01:00
|
|
|
charm-layer-openstack:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-layer-openstack
|
|
|
|
charm-layer-openstack-api:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-layer-openstack-api
|
|
|
|
charm-layer-openstack-principle:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-07-13 15:00:14 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-layer-openstack-principle
|
2021-03-12 10:53:32 +01:00
|
|
|
charm-magnum:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-magnum
|
|
|
|
charm-magnum-dashboard:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-magnum-dashboard
|
2020-11-16 10:57:50 +00:00
|
|
|
charm-magpie:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-magpie
|
2016-08-25 20:09:40 +00:00
|
|
|
charm-manila:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-08-25 20:09:40 +00:00
|
|
|
repos:
|
|
|
|
- openstack/charm-manila
|
2021-03-16 13:41:41 +01:00
|
|
|
charm-manila-dashboard:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-manila-dashboard
|
2023-09-06 14:11:23 +01:00
|
|
|
charm-manila-flashblade:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-manila-flashblade
|
2019-11-08 09:49:19 +08:00
|
|
|
charm-manila-ganesha:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-manila-ganesha
|
2016-10-18 10:23:23 +00:00
|
|
|
charm-manila-generic:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-10-18 10:23:23 +00:00
|
|
|
repos:
|
|
|
|
- openstack/charm-manila-generic
|
2021-03-11 14:58:18 +01:00
|
|
|
charm-manila-netapp:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-manila-netapp
|
2022-11-09 01:43:26 +03:00
|
|
|
charm-manila-infinidat:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-manila-infinidat
|
2019-04-03 13:44:10 +01:00
|
|
|
charm-masakari:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-masakari
|
|
|
|
charm-masakari-monitors:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-masakari-monitors
|
2019-10-11 14:39:43 -07:00
|
|
|
charm-mysql-innodb-cluster:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-mysql-innodb-cluster
|
|
|
|
charm-mysql-router:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-mysql-router
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-neutron-api:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-neutron-api
|
2020-06-24 12:02:59 +02:00
|
|
|
charm-neutron-api-plugin-arista:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-neutron-api-plugin-arista
|
2020-09-24 13:37:08 +00:00
|
|
|
charm-neutron-api-plugin-ironic:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-neutron-api-plugin-ironic
|
2019-10-10 16:58:34 +02:00
|
|
|
charm-neutron-api-plugin-ovn:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-neutron-api-plugin-ovn
|
2018-04-24 15:49:20 -07:00
|
|
|
charm-neutron-dynamic-routing:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2018-04-24 15:49:20 -07:00
|
|
|
repos:
|
|
|
|
- openstack/charm-neutron-dynamic-routing
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-neutron-gateway:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-neutron-gateway
|
|
|
|
charm-neutron-openvswitch:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-neutron-openvswitch
|
2018-10-09 08:40:43 +01:00
|
|
|
charm-nova-cell-controller:
|
2018-11-28 13:17:36 +01:00
|
|
|
release-management: external
|
2018-10-09 08:40:43 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-nova-cell-controller
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-nova-cloud-controller:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-nova-cloud-controller
|
|
|
|
charm-nova-compute:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-nova-compute
|
2021-11-30 14:17:04 +01:00
|
|
|
charm-nova-compute-nvidia-vgpu:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-nova-compute-nvidia-vgpu
|
2016-10-31 10:52:49 +00:00
|
|
|
charm-nova-compute-proxy:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-10-31 10:52:49 +00:00
|
|
|
repos:
|
|
|
|
- openstack/charm-nova-compute-proxy
|
2018-10-05 16:38:43 +02:00
|
|
|
charm-octavia:
|
2018-11-06 13:36:44 +01:00
|
|
|
release-management: external
|
2018-10-05 16:38:43 +02:00
|
|
|
repos:
|
|
|
|
- openstack/charm-octavia
|
2018-11-06 13:36:44 +01:00
|
|
|
charm-octavia-dashboard:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-octavia-dashboard
|
2019-06-20 15:07:16 +02:00
|
|
|
charm-octavia-diskimage-retrofit:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-octavia-diskimage-retrofit
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-openstack-dashboard:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-openstack-dashboard
|
2021-09-08 09:31:26 +01:00
|
|
|
charm-openstack-loadbalancer:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-openstack-loadbalancer
|
2020-08-03 15:10:58 +01:00
|
|
|
charm-ops-interface-ceph-client:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-ops-interface-ceph-client
|
2021-09-08 09:31:26 +01:00
|
|
|
charm-ops-interface-ceph-iscsi-admin-access:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-ops-interface-ceph-iscsi-admin-access
|
|
|
|
charm-ops-interface-openstack-loadbalancer:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-ops-interface-openstack-loadbalancer
|
2020-08-03 15:10:58 +01:00
|
|
|
charm-ops-interface-tls-certificates:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-ops-interface-tls-certificates
|
|
|
|
charm-ops-openstack:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-ops-openstack
|
2019-04-03 13:44:10 +01:00
|
|
|
charm-pacemaker-remote:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-pacemaker-remote
|
2015-09-17 16:30:34 +01:00
|
|
|
charm-percona-cluster:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-percona-cluster
|
|
|
|
charm-rabbitmq-server:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-rabbitmq-server
|
|
|
|
charm-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-specs
|
|
|
|
charm-swift-proxy:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-swift-proxy
|
|
|
|
charm-swift-storage:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2015-09-17 16:30:34 +01:00
|
|
|
repos:
|
|
|
|
- openstack/charm-swift-storage
|
2016-10-31 10:52:49 +00:00
|
|
|
charm-tempest:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: external
|
2016-10-31 10:52:49 +00:00
|
|
|
repos:
|
|
|
|
- openstack/charm-tempest
|
2020-04-16 17:00:45 +01:00
|
|
|
charm-trilio-data-mover:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-trilio-data-mover
|
|
|
|
charm-trilio-dm-api:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-trilio-dm-api
|
|
|
|
charm-trilio-horizon-plugin:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-trilio-horizon-plugin
|
|
|
|
charm-trilio-wlm:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-trilio-wlm
|
2019-05-23 09:34:09 +02:00
|
|
|
charm-vault:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
2019-05-31 17:52:47 +02:00
|
|
|
- openstack/charm-vault
|
2020-01-11 12:22:01 +02:00
|
|
|
charm-watcher:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-watcher
|
2020-01-17 15:20:36 +02:00
|
|
|
charm-watcher-dashboard:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-watcher-dashboard
|
2022-10-12 09:40:53 +01:00
|
|
|
charm-zuul-jobs:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/charm-zuul-jobs
|
OpenStack-Helm - Helm charts for OpenStack
The mission of OpenStack-Helm[1] is to provide a collection of Helm charts that
simply, resiliently, and flexibly deploy OpenStack and related services
on Kubernetes. It allows the Helm client to be used directly to deploy
OpenStack components, and as such is comparable to OpenStack-Ansible
(providing playbooks for Ansible) or Puppet OpenStack (providing modules
for Puppet). The project leverages native Kubernetes constructs and allows
OpenStack projects to be installed in combinations and configurations per
unique operator use cases.
OpenStack-Helm aims to facilitate push-button automated deployments and
rolling upgrades with no impact on tenant workloads. There is a one-to-one
correspondence between OpenStack projects and Helm charts, and charts are fully
independent of one another. This allows for use cases from traditional
OpenStack clouds, to deployment of standalone components, and combinations
in between.
The project provides reasonable, usable configuration by default, and allows
for deployment-time overrides of individual values or a chart's entire values
config. Dependent service endpoints (e.g. the Keystone API) default to
Kubernetes cluster-internal DNS names, but can be overridden to point at
external instances. Multiple, alternate SDN and SDS solutions will be
integrated, and can be configured at deploy-time. The choice of container
images is configurable as well, and can be mixed and matched on a
chart-by-chart basis per operator needs. The project aims to be
production-grade out of the box, including configuration for highly available
services, resilient storage for the control plane (e.g. Ceph), and security
features such as cluster-internal TLS.
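As one concrete illustration of the override mechanism described above, the
short Python sketch below drives the Helm client and replaces a single endpoint
value at deploy time while leaving every other default intact. The release name,
chart path, and endpoints override key are hypothetical placeholders rather than
keys confirmed from the charts.
    import subprocess
    # Hypothetical release name, chart path, and override key; adjust all three
    # to the chart actually being deployed.
    cmd = [
        "helm", "upgrade", "--install", "keystone", "./keystone",
        "--namespace", "openstack",
        # Point the identity endpoint at an external Keystone instead of the
        # cluster-internal DNS default.
        "--set",
        "endpoints.identity.host_fqdn_override.public=keystone.example.com",
    ]
    subprocess.run(cmd, check=True)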
OpenStack-Helm will follow the OpenStack independent release model[2], and will
ensure Helm charts integrate with an OpenStack release once it is released.
The team plans to change to cycle-trailing releases when ready, but will
need to close the gap between Newton (current-state) and master first. In the
meantime, it will model the cycle-trailing branch strategy as it catches up.
The team is interested in feedback on this plan and advice from the TC.
A number of OpenStack deployment projects pave the way for OpenStack-Helm,
including OpenStack Ansible[3], Kolla-Kubernetes[4], OpenStack-Chef[5],
and Puppet OpenStack [6]. OpenStack-Helm occupies a unique space in its
exclusive use of Kubernetes-native constructs during the deployment process,
including Helm packaging and gotpl templating. This tradeoff enables it to be
as lightweight and simple as possible, at the expense of operator choice of
deployment toolsets. The project team is eager to work with other OpenStack
deployment projects to solve common needs and share learnings.
The OpenStack-Helm project and the team follow the Four Opens:
* Open Source: The project source code[7], specifications[8] and
documentation[9] are open source and are developed in the community under the
Apache 2.0 license.
* Open Design: Design decisions are discussed and formalized via
specifications, as well as openly discussed in the IRC chat room[10] and the
team meetings[11]. The team had informal meetings at the Atlanta and
Denver PTGs, as well as a presentation[12], hands-on workshop, and a related
cross-project Forum at the Boston Summit. More of each are planned for Sydney.
* Open Development: All OpenStack-Helm code is reviewed and hosted in
OpenStack Infra[7]. Blueprints and bugs are tracked in Launchpad[13].
Quality gates are run to ensure OpenStack deploys and runs with every
patch set, and integration of standard Rally and Tempest tests is underway.
* Open Community: The project team and core reviewer team[14] represent
multiple employers, and are eager to groom new team members. Project
discussions and Q&A are frequently held in the public IRC chat room, and
all are welcome.
1. https://wiki.openstack.org/wiki/Openstack-helm
2. https://releases.openstack.org/reference/release_models.html
3. https://wiki.openstack.org/wiki/OpenStackAnsible
4. https://wiki.openstack.org/wiki/Kolla
5. https://wiki.openstack.org/wiki/Chef
6. https://docs.openstack.org/puppet-openstack-guide/latest/
7. https://git.openstack.org/cgit/openstack/openstack-helm/
8. https://git.openstack.org/cgit/openstack/openstack-helm/tree/doc/source/specs
9. http://openstack-helm.readthedocs.io/
10. http://eavesdrop.openstack.org/irclogs/%23openstack-helm/
11. http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-5/%23openstack-meeting-5.2017-10-03.log.html
12. https://www.openstack.org/videos/search?search=openstack-helm
13. https://launchpad.net/openstack-helm
14. https://review.openstack.org/#/admin/groups/1749,members
Change-Id: I70995640b6b760e7bfd7f1ec407b8004c855203f
2017-10-11 13:49:05 -05:00
|
|
|
OpenStack-Helm:
|
|
|
|
ptl:
|
2023-02-16 11:16:15 +11:00
|
|
|
name: Vladimir Kozhukalov
|
2023-05-30 20:33:32 +03:00
|
|
|
irc: kozhukalov
|
2023-02-16 11:16:15 +11:00
|
|
|
email: kozhukalov@gmail.com
|
2017-10-11 13:49:05 -05:00
|
|
|
service: Helm charts for OpenStack services
|
|
|
|
irc-channel: openstack-helm
|
|
|
|
mission: >
|
|
|
|
To provide a collection of Helm charts that simply, resiliently,
|
|
|
|
and flexibly deploy OpenStack and related services on Kubernetes.
|
2020-04-10 10:04:18 +02:00
|
|
|
OpenStack-Helm also produces lightweight OCI container images agnostic of
|
|
|
|
the deployment tooling for OpenStack.
|
2017-10-11 13:49:05 -05:00
|
|
|
url: https://wiki.openstack.org/wiki/Openstack-helm
|
|
|
|
deliverables:
|
|
|
|
openstack-helm:
|
2020-02-04 18:34:43 +01:00
|
|
|
release-management: external
|
OpenStack-Helm - Helm charts for OpenStack
The mission of OpenStack-Helm[1] is to provide a collection of Helm charts that
simply, resiliently, and flexibly deploy OpenStack and related services
on Kubernetes. It allows the Helm client to be used directly to deploy
OpenStack components, and as such is comarable to OpenStack-Ansible
(providing playbooks for Ansible) or Puppet OpenStack (providing modules
for Puppet). The project leverages native Kubernetes constructs and allows
OpenStack projects to be installed in combinations and configurations per
unique operator use cases.
OpenStack-Helm has a goal to facilitate pushbutton automated deployments, and
rolling upgrades with no impact to tenant workloads. There is a one-to-one
correspondence between OpenStack projects and Helm charts, and charts are fully
independent of one another. This allows for use cases from traditional
OpenStack clouds, to deployment of standalone components, and combinations
in between.
The project provides reasonable, usable configuration by default, and allows
for deployment-time overrides of individual values or a chart's entire values
config. Dependent service endpoints (e.g. the Keystone API) default to
Kubernetes cluster-internal DNS names, but can be overridden to point at
external instances. Multiple, alternate SDN and SDS solutions will be
integrated, and can be configured at deploy-time. The choice of container
images is configurable as well, and can be mixed and matched on a
chart-by-chart basis per operator needs. The project aims to be
production-grade out-of-box, including configuration for highly available
services, resilient storage for the control plane (e.g. Ceph), and security
features such as cluster-internal TLS.
OpenStack-Helm will follow the OpenStack independent release model[2], and will
ensure Helm charts integrate with an OpenStack release once it is released.
The team plans to change to cycle-trailing releases when ready, but will
need to close the gap between Newton (current-state) and master first. In the
meantime, it will model the cycle-trailing branch strategy as it catches up.
The team is interested in feedback on this plan and advice from the TC.
A number of OpenStack deployment projects pave the way for OpenStack-Helm,
including OpenStack Ansible[3], Kolla-Kubernetes[4], OpenStack-Chef[5],
and Puppet OpenStack [6]. OpenStack-Helm occupies a unique space in its
exclusive use of Kubernetes-native constructs during the deployment process,
including Helm packaging and gotpl templating. This tradeoff enables it to be
as lightweight and simple as possible, at the expense of operator choice of
deployment toolsets. The project team is eager to work with other OpenStack
deployment projects to solve common needs and share learnings.
The OpenStack-Helm project and the team follow the Four Opens:
* Open Source: The project source code[7], specifications[8] and
documentation[9] are open source and are developed in the community under the
Apache 2.0 license.
* Open Design: Design decisions are discussed and formalized via
specifications, as well as openly discussed in the IRC chat room[10] and the
team meetings[11]. The team had informal meetings at the Atlanta and
Denver PTGs, as well as a presentation[12], hands-on workshop, and a related
cross-project Forum at the Boston Summit. More of each are planned for Sydney.
* Open Development: All OpenStack-Helm code is reviewed and hosted in
OpenStack Infra[7]. Blueprints and bugs are tracked by Launchpad[13].
Quality gates are run to ensure OpenStack deploys and runs with every
patch set, and integration of standard Rally and Tempest tests is underway.
* Open Community: The project team and core reviewer team[14] represent
multiple employers, and are eager to groom new team members. Project
discussions and Q&A are frequently held in the public IRC chat room, and
all are welcome.
1. https://wiki.openstack.org/wiki/Openstack-helm
2. https://releases.openstack.org/reference/release_models.html
3. https://wiki.openstack.org/wiki/OpenStackAnsible
4. https://wiki.openstack.org/wiki/Kolla
5. https://wiki.openstack.org/wiki/Chef
6. https://docs.openstack.org/puppet-openstack-guide/latest/
7. https://git.openstack.org/cgit/openstack/openstack-helm/
8. https://git.openstack.org/cgit/openstack/openstack-helm/tree/doc/source/specs
9. http://openstack-helm.readthedocs.io/
10. http://eavesdrop.openstack.org/irclogs/%23openstack-helm/
11. http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-5/%23openstack-meeting-5.2017-10-03.log.html
12. https://www.openstack.org/videos/search?search=openstack-helm
13. https://launchpad.net/openstack-helm
14. https://review.openstack.org/#/admin/groups/1749,members
Change-Id: I70995640b6b760e7bfd7f1ec407b8004c855203f
2017-10-11 13:49:05 -05:00
|
|
|
repos:
|
|
|
|
- openstack/openstack-helm
|
2018-10-19 09:53:15 -05:00
|
|
|
openstack-helm-images:
|
2020-02-04 18:34:43 +01:00
|
|
|
release-management: external
|
2018-10-19 09:53:15 -05:00
|
|
|
repos:
|
|
|
|
- openstack/openstack-helm-images
|
2017-10-11 13:49:05 -05:00
|
|
|
openstack-helm-infra:
|
2020-02-04 18:34:43 +01:00
|
|
|
release-management: external
|
2017-10-11 13:49:05 -05:00
|
|
|
repos:
|
|
|
|
- openstack/openstack-helm-infra
|
2024-04-16 12:30:55 -05:00
|
|
|
openstack-helm-plugin:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
|
|
|
- openstack/openstack-helm-plugin
|
2020-04-10 10:04:18 +02:00
|
|
|
loci:
|
|
|
|
release-management: none
|
|
|
|
repos:
|
|
|
|
- openstack/loci
|
2015-07-28 12:11:57 +02:00
|
|
|
OpenStackAnsible:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2020-10-14 00:43:41 +00:00
|
|
|
name: Dmitriy Rabotyagov
|
|
|
|
irc: noonedeadpunk
|
2022-04-27 08:18:42 +02:00
|
|
|
email: noonedeadpunk@gmail.com
|
2015-07-14 12:15:21 -05:00
|
|
|
service: Ansible playbooks and roles for deployment
|
2015-12-14 20:08:43 +01:00
|
|
|
irc-channel: openstack-ansible
|
2015-07-28 12:11:57 +02:00
|
|
|
mission: >
|
|
|
|
Deploying OpenStack from source in a way that makes it scalable
|
|
|
|
while also being simple to operate, upgrade, and grow.
|
|
|
|
url: https://wiki.openstack.org/wiki/OpenStackAnsible
|
|
|
|
deliverables:
|
2018-02-21 16:14:08 +00:00
|
|
|
ansible-config_template:
|
|
|
|
repos:
|
|
|
|
- openstack/ansible-config_template
|
2018-08-07 14:14:46 +02:00
|
|
|
openstack-ansible:
|
2018-06-05 22:26:19 +02:00
|
|
|
repos:
|
2018-08-07 14:14:46 +02:00
|
|
|
- openstack/openstack-ansible
|
|
|
|
openstack-ansible-roles:
|
2018-03-15 13:05:51 +00:00
|
|
|
repos:
|
2024-02-23 18:40:39 +01:00
|
|
|
- openstack/ansible-role-frrouting
|
2024-11-19 18:57:27 +01:00
|
|
|
- openstack/ansible-role-httpd
|
2018-08-07 14:14:46 +02:00
|
|
|
- openstack/ansible-role-qdrouterd
|
2021-02-01 14:55:56 +02:00
|
|
|
- openstack/ansible-role-pki
|
2021-11-09 17:54:15 +02:00
|
|
|
- openstack/ansible-role-proxysql
|
2018-03-15 13:05:51 +00:00
|
|
|
- openstack/ansible-role-python_venv_build
|
2018-03-14 10:12:23 +00:00
|
|
|
- openstack/ansible-role-systemd_mount
|
|
|
|
- openstack/ansible-role-systemd_networkd
|
|
|
|
- openstack/ansible-role-systemd_service
|
2019-08-13 16:38:03 +03:00
|
|
|
- openstack/ansible-role-uwsgi
|
2021-07-07 16:24:18 +03:00
|
|
|
- openstack/ansible-role-vault
|
2022-11-01 17:46:53 +01:00
|
|
|
- openstack/ansible-role-zookeeper
|
2017-09-29 10:08:10 +00:00
|
|
|
- openstack/ansible-hardening
|
2015-11-11 09:46:16 -06:00
|
|
|
- openstack/openstack-ansible-apt_package_pinning
|
2016-08-21 16:27:54 +01:00
|
|
|
- openstack/openstack-ansible-ceph_client
|
2015-12-09 10:50:34 -06:00
|
|
|
- openstack/openstack-ansible-galera_server
|
2016-08-21 16:27:54 +01:00
|
|
|
- openstack/openstack-ansible-haproxy_server
|
2015-11-11 09:46:16 -06:00
|
|
|
- openstack/openstack-ansible-lxc_container_create
|
|
|
|
- openstack/openstack-ansible-lxc_hosts
|
2015-12-09 10:50:34 -06:00
|
|
|
- openstack/openstack-ansible-memcached_server
|
2015-11-11 09:46:16 -06:00
|
|
|
- openstack/openstack-ansible-openstack_hosts
|
2016-02-25 08:51:23 +00:00
|
|
|
- openstack/openstack-ansible-openstack_openrc
|
2018-08-07 14:14:46 +02:00
|
|
|
- openstack/openstack-ansible-ops
|
2020-06-17 12:26:21 +03:00
|
|
|
- openstack/openstack-ansible-os_adjutant
|
2016-06-07 15:41:58 +01:00
|
|
|
- openstack/openstack-ansible-os_aodh
|
2016-11-16 10:20:01 +00:00
|
|
|
- openstack/openstack-ansible-os_barbican
|
2018-08-07 14:14:46 +02:00
|
|
|
- openstack/openstack-ansible-os_blazar
|
2016-06-07 15:41:58 +01:00
|
|
|
- openstack/openstack-ansible-os_ceilometer
|
|
|
|
- openstack/openstack-ansible-os_cinder
|
2018-08-07 14:14:46 +02:00
|
|
|
- openstack/openstack-ansible-os_cloudkitty
|
2016-11-16 10:20:01 +00:00
|
|
|
- openstack/openstack-ansible-os_designate
|
2016-06-07 15:41:58 +01:00
|
|
|
- openstack/openstack-ansible-os_glance
|
2016-08-16 16:43:53 +01:00
|
|
|
- openstack/openstack-ansible-os_gnocchi
|
2016-06-07 15:41:58 +01:00
|
|
|
- openstack/openstack-ansible-os_heat
|
|
|
|
- openstack/openstack-ansible-os_horizon
|
|
|
|
- openstack/openstack-ansible-os_ironic
|
|
|
|
- openstack/openstack-ansible-os_keystone
|
2016-08-16 16:43:53 +01:00
|
|
|
- openstack/openstack-ansible-os_magnum
|
2018-10-06 07:52:49 -04:00
|
|
|
- openstack/openstack-ansible-os_manila
|
2018-08-07 14:14:46 +02:00
|
|
|
- openstack/openstack-ansible-os_masakari
|
2019-02-04 16:40:02 -05:00
|
|
|
- openstack/openstack-ansible-os_mistral
|
2020-12-07 14:32:34 +02:00
|
|
|
- openstack/openstack-ansible-os_monasca
|
|
|
|
- openstack/openstack-ansible-os_monasca-agent
|
2016-06-07 15:41:58 +01:00
|
|
|
- openstack/openstack-ansible-os_neutron
|
|
|
|
- openstack/openstack-ansible-os_nova
|
2018-02-21 11:21:40 +00:00
|
|
|
- openstack/openstack-ansible-os_octavia
|
2018-11-02 13:32:58 +00:00
|
|
|
- openstack/openstack-ansible-os_placement
|
2016-08-30 15:07:37 +01:00
|
|
|
- openstack/openstack-ansible-os_rally
|
2022-11-01 18:14:28 +01:00
|
|
|
- openstack/openstack-ansible-os_skyline
|
2016-06-07 15:41:58 +01:00
|
|
|
- openstack/openstack-ansible-os_swift
|
2018-02-21 11:21:40 +00:00
|
|
|
- openstack/openstack-ansible-os_tacker
|
2016-06-07 15:41:58 +01:00
|
|
|
- openstack/openstack-ansible-os_tempest
|
2016-10-20 13:00:06 +01:00
|
|
|
- openstack/openstack-ansible-os_trove
|
2018-08-07 14:14:46 +02:00
|
|
|
- openstack/openstack-ansible-os_zun
|
2016-06-07 15:41:58 +01:00
|
|
|
- openstack/openstack-ansible-plugins
|
|
|
|
- openstack/openstack-ansible-rabbitmq_server
|
|
|
|
- openstack/openstack-ansible-repo_server
|
2018-08-07 14:14:46 +02:00
|
|
|
- openstack/openstack-ansible-tests
|
2021-06-23 20:42:08 +03:00
|
|
|
openstack-ansible-nspawn_containers:
|
|
|
|
deprecated: wallaby
|
|
|
|
release-management: deprecated
|
|
|
|
repos:
|
|
|
|
- openstack/openstack-ansible-nspawn_container_create
|
|
|
|
- openstack/openstack-ansible-nspawn_hosts
|
2020-12-07 12:58:42 +02:00
|
|
|
openstack-ansible-galera_client:
|
|
|
|
deprecated: victoria
|
|
|
|
release-management: deprecated
|
|
|
|
repos:
|
|
|
|
- openstack/openstack-ansible-galera_client
|
2020-07-22 23:13:55 +03:00
|
|
|
openstack-ansible-os_congress:
|
|
|
|
deprecated: victoria
|
|
|
|
release-management: deprecated
|
|
|
|
repos:
|
|
|
|
- openstack/openstack-ansible-os_congress
|
2017-04-12 15:59:10 +01:00
|
|
|
openstack-ansible-os_karbor:
|
2018-12-12 15:05:32 +01:00
|
|
|
release-management: deprecated
|
2017-04-12 15:59:10 +01:00
|
|
|
repos:
|
|
|
|
- openstack/openstack-ansible-os_karbor
|
2021-07-07 15:53:13 +03:00
|
|
|
openstack-ansible-os_panko:
|
|
|
|
deprecated: Xena
|
|
|
|
release-management: deprecated
|
|
|
|
repos:
|
|
|
|
- openstack/openstack-ansible-os_panko
|
2022-10-31 16:24:56 +01:00
|
|
|
openstack-ansible-rsyslog:
|
|
|
|
deprecated: Zed
|
|
|
|
release-management: deprecated
|
|
|
|
repos:
|
|
|
|
- openstack/openstack-ansible-rsyslog_server
|
|
|
|
- openstack/openstack-ansible-rsyslog_client
|
2015-11-11 09:46:16 -06:00
|
|
|
openstack-ansible-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-11-11 09:46:16 -06:00
|
|
|
repos:
|
|
|
|
- openstack/openstack-ansible-specs
|
2017-11-28 13:59:33 -06:00
|
|
|
OpenStackSDK:
|
|
|
|
ptl:
|
2020-10-14 00:43:41 +00:00
|
|
|
name: Artem Goncharov
|
|
|
|
irc: gtema
|
|
|
|
email: artem.goncharov@gmail.com
|
2019-09-16 10:25:43 +02:00
|
|
|
appointed:
|
|
|
|
- ussuri
|
2017-11-28 13:59:33 -06:00
|
|
|
irc-channel: openstack-sdks
|
2020-03-04 09:29:17 -06:00
|
|
|
service: Multi-cloud Python SDK and CLI for End Users
|
2017-11-28 13:59:33 -06:00
|
|
|
mission: >
|
2024-02-29 09:32:11 +01:00
|
|
|
To provide a unified multi-cloud aware SDK and CLI for the OpenStack
|
2020-03-04 09:29:17 -06:00
|
|
|
REST API exposing both the full set of low-level APIs as well as curated higher
|
2024-02-29 09:32:11 +01:00
|
|
|
level business logic, and to ensure end users have the necessary tools to work
|
|
|
|
with OpenStack-based clouds.
|
2018-03-23 08:47:16 -05:00
|
|
|
url: https://docs.openstack.org/openstacksdk/latest/
|
2017-11-28 13:59:33 -06:00
|
|
|
deliverables:
|
2020-03-04 09:29:17 -06:00
|
|
|
cliff:
|
|
|
|
repos:
|
|
|
|
- openstack/cliff
|
2024-02-29 09:32:11 +01:00
|
|
|
codegenerator:
|
|
|
|
repos:
|
|
|
|
- openstack/codegenerator
|
|
|
|
openapi:
|
|
|
|
repos:
|
|
|
|
- openstack/openapi
|
2020-03-04 09:29:17 -06:00
|
|
|
openstackclient:
|
|
|
|
repos:
|
|
|
|
- openstack/openstackclient
|
2018-03-23 08:47:16 -05:00
|
|
|
openstacksdk:
|
2017-11-28 13:59:33 -06:00
|
|
|
repos:
|
2018-03-23 08:47:16 -05:00
|
|
|
- openstack/openstacksdk
|
2017-11-30 10:56:33 -06:00
|
|
|
os-client-config:
|
|
|
|
repos:
|
|
|
|
- openstack/os-client-config
|
2017-11-28 13:59:33 -06:00
|
|
|
os-service-types:
|
|
|
|
repos:
|
|
|
|
- openstack/os-service-types
|
2020-03-04 09:29:17 -06:00
|
|
|
osc-lib:
|
|
|
|
repos:
|
|
|
|
- openstack/osc-lib
|
|
|
|
python-openstackclient:
|
|
|
|
repos:
|
|
|
|
- openstack/python-openstackclient
|
2017-11-28 13:59:33 -06:00
|
|
|
requestsexceptions:
|
|
|
|
repos:
|
2019-04-20 20:48:07 +00:00
|
|
|
- openstack/requestsexceptions
|
2017-11-28 13:59:33 -06:00
|
|
|
shade:
|
|
|
|
repos:
|
2019-04-20 20:48:07 +00:00
|
|
|
- openstack/shade
|
2015-09-02 10:47:14 -05:00
|
|
|
oslo:
|
2025-01-16 11:26:09 -08:00
|
|
|
ptl:
|
|
|
|
name: APPOINTMENT NEEDED
|
|
|
|
irc: No nick supplied
|
|
|
|
email: example@example.org
|
2015-07-28 12:11:57 +02:00
|
|
|
irc-channel: openstack-oslo
|
|
|
|
service: Common libraries
|
|
|
|
mission: >
|
|
|
|
To produce a set of python libraries containing code shared by OpenStack
|
|
|
|
projects. The APIs provided by these libraries should be high quality,
|
|
|
|
stable, consistent, documented and generally applicable.
|
|
|
|
url: https://wiki.openstack.org/wiki/Oslo
|
|
|
|
deliverables:
|
|
|
|
automaton:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/automaton
|
2017-03-23 09:18:10 -04:00
|
|
|
castellan:
|
|
|
|
repos:
|
|
|
|
- openstack/castellan
|
2015-07-28 12:11:57 +02:00
|
|
|
cookiecutter:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2019-04-20 20:48:07 +00:00
|
|
|
- openstack/cookiecutter
|
2015-07-28 12:11:57 +02:00
|
|
|
debtcollector:
|
|
|
|
repos:
|
|
|
|
- openstack/debtcollector
|
|
|
|
devstack-plugin-amqp1:
|
2021-04-02 10:14:01 +02:00
|
|
|
release-management: none
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/devstack-plugin-amqp1
|
2015-11-19 04:23:23 +09:00
|
|
|
devstack-plugin-kafka:
|
2021-04-02 10:14:01 +02:00
|
|
|
release-management: none
|
2015-11-19 04:23:23 +09:00
|
|
|
repos:
|
|
|
|
- openstack/devstack-plugin-kafka
|
2020-08-20 16:39:13 +02:00
|
|
|
etcd3gw:
|
|
|
|
repos:
|
|
|
|
- openstack/etcd3gw
|
2015-07-28 12:11:57 +02:00
|
|
|
futurist:
|
|
|
|
repos:
|
|
|
|
- openstack/futurist
|
2019-10-21 16:11:42 +02:00
|
|
|
microversion-parse:
|
|
|
|
repos:
|
|
|
|
- openstack/microversion-parse
|
2019-05-04 10:35:39 -06:00
|
|
|
openstack-doc-tools:
|
|
|
|
repos:
|
|
|
|
- openstack/openstack-doc-tools
|
|
|
|
openstackdocstheme:
|
|
|
|
repos:
|
|
|
|
- openstack/openstackdocstheme
|
|
|
|
os-api-ref:
|
|
|
|
repos:
|
|
|
|
- openstack/os-api-ref
|
2015-07-28 12:11:57 +02:00
|
|
|
oslo-cookiecutter:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-28 12:11:57 +02:00
|
|
|
repos:
|
2019-04-20 20:48:07 +00:00
|
|
|
- openstack/oslo-cookiecutter
|
2015-07-28 12:11:57 +02:00
|
|
|
oslo-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo-specs
|
|
|
|
oslo.cache:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.cache
|
|
|
|
oslo.concurrency:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.concurrency
|
|
|
|
oslo.config:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.config
|
|
|
|
oslo.context:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.context
|
|
|
|
oslo.db:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.db
|
|
|
|
oslo.i18n:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.i18n
|
2018-03-07 15:13:40 +00:00
|
|
|
oslo.limit:
|
|
|
|
repos:
|
|
|
|
- openstack/oslo.limit
|
2015-07-28 12:11:57 +02:00
|
|
|
oslo.log:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.log
|
|
|
|
oslo.messaging:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.messaging
|
2020-05-06 14:39:04 +02:00
|
|
|
oslo.metrics:
|
|
|
|
repos:
|
|
|
|
- openstack/oslo.metrics
|
2015-07-28 12:11:57 +02:00
|
|
|
oslo.middleware:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.middleware
|
|
|
|
oslo.policy:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.policy
|
2015-07-22 17:50:15 +10:00
|
|
|
oslo.privsep:
|
|
|
|
repos:
|
|
|
|
- openstack/oslo.privsep
|
2015-07-28 12:11:57 +02:00
|
|
|
oslo.reports:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.reports
|
|
|
|
oslo.rootwrap:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.rootwrap
|
|
|
|
oslo.serialization:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.serialization
|
|
|
|
oslo.service:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.service
|
2016-05-12 14:38:20 -07:00
|
|
|
oslo.tools:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2016-05-12 14:38:20 -07:00
|
|
|
repos:
|
|
|
|
- openstack/oslo.tools
|
2018-09-13 22:39:02 +00:00
|
|
|
oslo.upgradecheck:
|
|
|
|
repos:
|
|
|
|
- openstack/oslo.upgradecheck
|
2015-07-28 12:11:57 +02:00
|
|
|
oslo.utils:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.utils
|
|
|
|
oslo.versionedobjects:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.versionedobjects
|
|
|
|
oslo.vmware:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslo.vmware
|
|
|
|
oslotest:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/oslotest
|
2016-01-12 17:54:13 -08:00
|
|
|
osprofiler:
|
|
|
|
repos:
|
|
|
|
- openstack/osprofiler
|
2015-07-28 12:11:57 +02:00
|
|
|
pbr:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2019-04-20 20:48:07 +00:00
|
|
|
- openstack/pbr
|
2017-04-02 11:18:50 -05:00
|
|
|
sphinx-feature-classification:
|
|
|
|
repos:
|
|
|
|
- openstack/sphinx-feature-classification
|
2015-07-28 12:11:57 +02:00
|
|
|
stevedore:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/stevedore
|
|
|
|
taskflow:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/taskflow
|
|
|
|
tooz:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/tooz
|
2019-05-04 10:35:39 -06:00
|
|
|
whereto:
|
|
|
|
repos:
|
|
|
|
- openstack/whereto
|
2015-09-02 10:47:14 -05:00
|
|
|
Puppet OpenStack:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2022-02-24 16:35:09 -06:00
|
|
|
name: Takashi Kajinami
|
|
|
|
irc: tkajinam
|
2023-10-13 12:16:20 +09:00
|
|
|
email: kajinamit@oss.nttdata.com
|
2021-09-08 09:58:47 -05:00
|
|
|
appointed:
|
|
|
|
- yoga
|
2021-09-08 10:24:33 -05:00
|
|
|
irc-channel: puppet-openstack
|
2015-07-14 12:15:21 -05:00
|
|
|
service: Puppet modules for deployment
|
2015-04-01 20:25:24 -04:00
|
|
|
mission: >
|
|
|
|
The Puppet modules for OpenStack bring scalable and reliable IT automation
|
2015-07-16 15:57:12 +02:00
|
|
|
to OpenStack cloud deployments.
|
2017-08-22 19:08:18 +02:00
|
|
|
url: https://docs.openstack.org/puppet-openstack-guide/latest/
|
2015-07-16 15:57:12 +02:00
|
|
|
deliverables:
|
2015-09-10 12:57:56 -04:00
|
|
|
puppet-aodh:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-aodh
|
2015-09-30 17:47:24 -04:00
|
|
|
puppet-barbican:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-barbican
|
2015-07-16 15:57:12 +02:00
|
|
|
puppet-ceilometer:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-ceilometer
|
2016-03-31 12:57:24 -07:00
|
|
|
puppet-ceph:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-ceph
|
2015-07-16 15:57:12 +02:00
|
|
|
puppet-cinder:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-cinder
|
2016-09-27 17:53:08 +02:00
|
|
|
puppet-cloudkitty:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-cloudkitty
|
2015-07-16 15:57:12 +02:00
|
|
|
puppet-designate:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-designate
|
|
|
|
puppet-glance:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-glance
|
|
|
|
puppet-gnocchi:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-gnocchi
|
|
|
|
puppet-heat:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-heat
|
|
|
|
puppet-horizon:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-horizon
|
|
|
|
puppet-ironic:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-ironic
|
|
|
|
puppet-keystone:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-keystone
|
2015-10-29 18:22:03 +00:00
|
|
|
puppet-magnum:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-magnum
|
2015-07-16 15:57:12 +02:00
|
|
|
puppet-manila:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-manila
|
|
|
|
puppet-mistral:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-mistral
|
|
|
|
puppet-neutron:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-neutron
|
|
|
|
puppet-nova:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-nova
|
2016-01-26 18:37:46 -05:00
|
|
|
puppet-octavia:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-octavia
|
2015-07-16 15:57:12 +02:00
|
|
|
puppet-openstack-cookiecutter:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
|
|
|
- openstack/puppet-openstack-cookiecutter
|
2016-03-23 16:09:19 -04:00
|
|
|
puppet-openstack-guide:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2016-03-23 16:09:19 -04:00
|
|
|
repos:
|
|
|
|
- openstack/puppet-openstack-guide
|
2015-07-16 15:57:12 +02:00
|
|
|
puppet-openstack-integration:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-openstack-integration
|
|
|
|
puppet-openstack_extras:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-openstack_extras
|
2015-09-29 16:01:02 -04:00
|
|
|
puppet-openstack_spec_helper:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-openstack_spec_helper
|
2015-07-16 15:57:12 +02:00
|
|
|
puppet-openstacklib:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-openstacklib
|
2016-01-21 10:43:15 -05:00
|
|
|
puppet-oslo:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-oslo
|
2016-03-09 09:47:48 -05:00
|
|
|
puppet-ovn:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-ovn
|
2018-09-15 11:36:31 -06:00
|
|
|
puppet-placement:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-placement
|
2015-07-16 15:57:12 +02:00
|
|
|
puppet-swift:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-swift
|
|
|
|
puppet-tempest:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-tempest
|
|
|
|
puppet-trove:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-trove
|
2015-12-06 08:59:45 +02:00
|
|
|
puppet-vitrage:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-vitrage
|
2015-07-16 15:57:12 +02:00
|
|
|
puppet-vswitch:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-vswitch
|
2016-05-30 13:50:33 +00:00
|
|
|
puppet-watcher:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-watcher
|
2015-07-16 15:57:12 +02:00
|
|
|
puppet-zaqar:
|
|
|
|
repos:
|
|
|
|
- openstack/puppet-zaqar
|
2015-07-28 12:11:57 +02:00
|
|
|
Quality Assurance:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2021-03-10 09:32:23 -08:00
|
|
|
name: Martin Kopec
|
2021-09-08 10:21:35 -05:00
|
|
|
irc: kopecmartin
|
2021-03-10 09:32:23 -08:00
|
|
|
email: mkopec@redhat.com
|
2015-07-28 12:11:57 +02:00
|
|
|
irc-channel: openstack-qa
|
2015-04-17 13:33:21 -05:00
|
|
|
mission: >
|
2015-07-28 12:11:57 +02:00
|
|
|
Develop, maintain, and initiate tools and plans to ensure the upstream
|
|
|
|
stability and quality of OpenStack, and its release readiness at any point
|
|
|
|
during the release cycle.
|
|
|
|
url: https://wiki.openstack.org/wiki/QA
|
|
|
|
deliverables:
|
|
|
|
bashate:
|
|
|
|
repos:
|
2019-04-20 20:48:07 +00:00
|
|
|
- openstack/bashate
|
2016-11-07 14:58:48 +09:00
|
|
|
coverage2sql:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2016-11-07 14:58:48 +09:00
|
|
|
repos:
|
|
|
|
- openstack/coverage2sql
|
2015-07-28 12:11:57 +02:00
|
|
|
devstack:
|
|
|
|
repos:
|
2019-04-20 20:48:07 +00:00
|
|
|
- openstack/devstack
|
2016-03-03 16:04:34 -05:00
|
|
|
devstack-plugin-ceph:
|
|
|
|
repos:
|
|
|
|
- openstack/devstack-plugin-ceph
|
2015-07-28 12:11:57 +02:00
|
|
|
devstack-plugin-cookiecutter:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-28 12:11:57 +02:00
|
|
|
repos:
|
2019-04-20 20:48:07 +00:00
|
|
|
- openstack/devstack-plugin-cookiecutter
|
2020-03-09 18:45:24 +08:00
|
|
|
devstack-plugin-open-cas:
|
|
|
|
release-management: none
|
|
|
|
repos:
|
|
|
|
- openstack/devstack-plugin-open-cas
|
2025-01-23 16:59:33 +05:30
|
|
|
devstack-plugin-prometheus:
|
|
|
|
release-management: none
|
|
|
|
repos:
|
|
|
|
- openstack/devstack-plugin-prometheus
|
2017-01-18 10:33:33 -05:00
|
|
|
devstack-tools:
|
|
|
|
repos:
|
|
|
|
- openstack/devstack-tools
|
2015-07-28 12:11:57 +02:00
|
|
|
devstack-vagrant:
|
2018-12-05 10:23:38 +01:00
|
|
|
release-management: none
|
2015-07-28 12:11:57 +02:00
|
|
|
repos:
|
2019-04-20 20:48:07 +00:00
|
|
|
- openstack/devstack-vagrant
|
2015-07-28 12:11:57 +02:00
|
|
|
eslint-config-openstack:
|
|
|
|
repos:
|
|
|
|
- openstack/eslint-config-openstack
|
|
|
|
grenade:
|
|
|
|
repos:
|
2019-04-20 20:48:07 +00:00
|
|
|
- openstack/grenade
|
2015-07-28 12:11:57 +02:00
|
|
|
hacking:
|
|
|
|
repos:
|
2019-04-20 20:48:07 +00:00
|
|
|
- openstack/hacking
|
2016-06-09 14:57:54 -06:00
|
|
|
karma-subunit-reporter:
|
|
|
|
repos:
|
|
|
|
- openstack/karma-subunit-reporter
|
2015-12-05 09:30:11 -08:00
|
|
|
os-performance-tools:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-12-05 09:30:11 -08:00
|
|
|
repos:
|
|
|
|
- openstack/os-performance-tools
|
2015-07-28 12:11:57 +02:00
|
|
|
os-testr:
|
|
|
|
repos:
|
|
|
|
- openstack/os-testr
|
|
|
|
qa-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-28 12:11:57 +02:00
|
|
|
repos:
|
|
|
|
- openstack/qa-specs
|
2015-08-26 16:40:32 -06:00
|
|
|
stackviz:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-08-26 16:40:32 -06:00
|
|
|
repos:
|
|
|
|
- openstack/stackviz
|
2015-07-28 12:11:57 +02:00
|
|
|
tempest:
|
|
|
|
repos:
|
|
|
|
- openstack/tempest
|
2015-08-03 10:52:40 +02:00
|
|
|
tempest-plugin-cookiecutter:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-08-03 10:52:40 +02:00
|
|
|
repos:
|
2015-09-25 15:42:12 -04:00
|
|
|
- openstack/tempest-plugin-cookiecutter
|
2018-01-12 01:38:26 +00:00
|
|
|
tempest-stress:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2018-01-12 01:38:26 +00:00
|
|
|
repos:
|
|
|
|
- openstack/tempest-stress
|
2018-02-05 16:01:57 +00:00
|
|
|
devstack-plugin-container:
|
|
|
|
repos:
|
|
|
|
- openstack/devstack-plugin-container
|
2020-03-07 20:56:01 -06:00
|
|
|
devstack-plugin-nfs:
|
|
|
|
release-management: none
|
|
|
|
repos:
|
|
|
|
- openstack/devstack-plugin-nfs
|
2020-03-23 11:44:53 -04:00
|
|
|
whitebox-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/whitebox-tempest-plugin
|
2015-09-02 10:47:14 -05:00
|
|
|
rally:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2024-03-22 09:00:21 +09:00
|
|
|
name: Andrey Kurilin
|
2023-03-21 12:24:44 +01:00
|
|
|
irc: andreykurilin
|
|
|
|
email: andr.kurilin@gmail.com
|
2020-04-03 11:59:17 -05:00
|
|
|
appointed:
|
|
|
|
- victoria
|
2023-03-21 12:24:44 +01:00
|
|
|
- '2023.2'
|
2023-03-21 12:24:44 +01:00
|
|
|
- '2024.1'
|
2015-07-28 12:11:57 +02:00
|
|
|
irc-channel: openstack-rally
|
|
|
|
service: Benchmark service
|
|
|
|
mission: >
|
|
|
|
To provide a framework for performance analysis and benchmarking of
|
|
|
|
individual OpenStack components as well as full production OpenStack
|
|
|
|
cloud deployments.
|
|
|
|
url: https://wiki.openstack.org/wiki/Rally
|
|
|
|
deliverables:
|
|
|
|
rally:
|
|
|
|
repos:
|
|
|
|
- openstack/rally
|
2019-12-18 15:17:57 +01:00
|
|
|
rally-openstack:
|
|
|
|
repos:
|
2018-02-19 15:24:21 +02:00
|
|
|
- openstack/rally-openstack
|
2019-12-18 15:17:57 +01:00
|
|
|
performance-docs:
|
|
|
|
release-management: none
|
|
|
|
repos:
|
2015-12-03 13:54:50 +03:00
|
|
|
- openstack/performance-docs
|
2015-12-09 17:52:56 +01:00
|
|
|
Release Management:
|
2024-08-09 14:08:25 +02:00
|
|
|
leadership_type: distributed
|
2015-12-09 17:52:56 +01:00
|
|
|
irc-channel: openstack-release
|
2015-07-28 12:11:57 +02:00
|
|
|
mission: >
|
2015-12-09 17:52:56 +01:00
|
|
|
Coordinating the release of OpenStack deliverables, by defining the
|
|
|
|
overall development cycle, release models, publication processes,
|
|
|
|
versioning rules and tools, then enabling project teams to produce
|
|
|
|
their own releases.
|
|
|
|
url: https://wiki.openstack.org/wiki/Release_Management
|
2015-07-28 12:11:57 +02:00
|
|
|
deliverables:
|
2016-02-10 16:02:45 -05:00
|
|
|
release-test:
|
|
|
|
repos:
|
2016-03-21 15:35:28 -07:00
|
|
|
- openstack/release-test
|
2015-07-28 12:11:57 +02:00
|
|
|
releases:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-28 12:11:57 +02:00
|
|
|
repos:
|
|
|
|
- openstack/releases
|
2015-09-03 13:04:47 +00:00
|
|
|
reno:
|
|
|
|
repos:
|
|
|
|
- openstack/reno
|
2015-07-28 12:11:57 +02:00
|
|
|
specs-cookiecutter:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-28 12:11:57 +02:00
|
|
|
repos:
|
2019-04-20 20:48:07 +00:00
|
|
|
- openstack/specs-cookiecutter
|
2024-08-09 14:08:25 +02:00
|
|
|
liaisons:
|
|
|
|
release:
|
|
|
|
- name: Thierry Carrez
|
|
|
|
irc: ttx
|
|
|
|
email: thierry@openstack.org
|
|
|
|
- name: Elod Illes
|
|
|
|
irc: elodilles
|
|
|
|
email: elod.illes@est.tech
|
|
|
|
- name: Jens Harbott
|
|
|
|
irc: frickler
|
|
|
|
email: frickler@offenerstapel.de
|
|
|
|
tact-sig:
|
|
|
|
- name: Jeremy Stanley
|
|
|
|
irc: fungi
|
|
|
|
email: fungi@yuggoth.org
|
|
|
|
security:
|
|
|
|
- name: Thierry Carrez
|
|
|
|
irc: ttx
|
|
|
|
email: thierry@openstack.org
|
|
|
|
tc-liaison:
|
|
|
|
- name: Ghanshyam Mann
|
|
|
|
irc: gmann
|
|
|
|
email: gmann@ghanshyammann.com
|
2016-08-22 13:23:21 +10:00
|
|
|
requirements:
|
2022-08-25 18:22:15 -07:00
|
|
|
leadership_type: distributed
|
2016-08-22 13:23:21 +10:00
|
|
|
irc-channel: openstack-requirements
|
2019-11-20 12:14:56 +01:00
|
|
|
service: Common dependency management
|
2016-08-22 13:23:21 +10:00
|
|
|
mission: >
|
2016-08-31 11:07:20 +02:00
|
|
|
Coordinating and converging the libraries used by OpenStack projects,
|
|
|
|
while ensuring that all libraries are compatible both technically and
|
|
|
|
from a licensing standpoint.
|
2016-08-22 13:23:21 +10:00
|
|
|
url: https://wiki.openstack.org/wiki/Requirements
|
|
|
|
deliverables:
|
|
|
|
requirements:
|
|
|
|
repos:
|
|
|
|
- openstack/requirements
|
2022-08-25 18:22:15 -07:00
|
|
|
liaisons:
|
|
|
|
release:
|
|
|
|
- name: Matthew Thode
|
|
|
|
irc: prometheanfire
|
|
|
|
email: mthode@mthode.org
|
|
|
|
- name: Tony Breeds
|
|
|
|
irc: tonyb
|
|
|
|
email: tony@bakeyournoodle.com
|
|
|
|
tact-sig:
|
|
|
|
- name: Matthew Thode
|
|
|
|
irc: prometheanfire
|
|
|
|
email: mthode@mthode.org
|
|
|
|
- name: Tony Breeds
|
|
|
|
irc: tonyb
|
|
|
|
email: tony@bakeyournoodle.com
|
|
|
|
security:
|
|
|
|
- name: Matthew Thode
|
|
|
|
irc: prometheanfire
|
|
|
|
email: mthode@mthode.org
|
|
|
|
- name: Tony Breeds
|
|
|
|
irc: tonyb
|
|
|
|
email: tony@bakeyournoodle.com
|
2024-06-03 11:07:41 -07:00
|
|
|
tc-liaison:
|
|
|
|
- name: Ghanshyam Mann
|
|
|
|
irc: gmann
|
|
|
|
email: gmann@ghanshyammann.com
|
|
|
|
- name: Dmitriy Rabotyagov
|
|
|
|
irc: noonedeadpunk
|
|
|
|
email: noonedeadpunk@gmail.com
|
Add Skyline as an official project
Skyline is an OpenStack dashboard optimized for UI and UX.
It has a modern technology stack and ecosystem, is easier for
developers to maintain and for users to operate, and has higher
concurrency performance.
Here are two videos to preview Skyline:
- Skyline technical overview[1].
- Skyline dashboard operating demo[2].
Skyline has the following technical advantages:
1. Separation of concerns: the front end focuses on functional design
and user experience, the back end on data logic.
2. Embraces a modern browser technology stack and ecosystem: React, Ant Design,
and MobX.
3. Most functions call the OpenStack APIs directly, so the call chain is
simple, the logic is clearer, and the API responds quickly.
4. React components handle rendering, so the page display
process is fast and smooth, bringing users a better UI and UX
experience.
At present, Skyline has completed feature development for the
OpenStack core components, as well as most of the functions of
VPNaaS, Octavia and other components.
Corresponding automated test jobs[3][4] are also integrated on
Zuul, and there is good code coverage.
Devstack deployment integration has also been completed, and
integration with kolla and kolla-ansible will be completed via pending
patches[5][6] after Skyline becomes an official project.
Skyline's next roadmap item is to cover all existing functions
of Horizon and complete page development for the other OpenStack
components.
[1] https://www.youtube.com/watch?v=Ro8tROYKDlE
[2] https://www.youtube.com/watch?v=pFAJLwzxv0A
[3] https://zuul.opendev.org/t/openstack/project/opendev.org/skyline/skyline-apiserver
[4] https://zuul.opendev.org/t/openstack/project/opendev.org/skyline/skyline-console
[5] https://review.opendev.org/c/openstack/kolla/+/810796
[6] https://review.opendev.org/c/openstack/kolla-ansible/+/810566
Change-Id: Ib91a241c64351c5e69023b2523408c75b80ff74d
2021-10-14 23:42:20 +08:00
|
|
|
skyline:
|
|
|
|
ptl:
|
2024-04-05 10:35:11 +08:00
|
|
|
name: Wenxiang Wu
|
|
|
|
irc: wu_wenxiang
|
|
|
|
email: wu.wenxiang@99cloud.net
|
2022-02-27 18:49:14 -06:00
|
|
|
appointed:
|
|
|
|
- zed
|
2024-09-19 23:17:25 +09:00
|
|
|
- '2024.1'
|
|
|
|
- '2024.2'
|
2021-10-14 23:42:20 +08:00
|
|
|
irc-channel: openstack-skyline
|
|
|
|
service: Dashboard
|
|
|
|
mission: >
|
|
|
|
Skyline is an OpenStack dashboard optimized for UI and UX.
|
|
|
|
It has a modern technology stack and ecosystem, is easier for
|
|
|
|
developers to maintain and for users to operate, and has higher
|
|
|
|
concurrency performance.
|
|
|
|
url: https://wiki.openstack.org/wiki/Skyline
|
|
|
|
deliverables:
|
|
|
|
skyline-apiserver:
|
|
|
|
repos:
|
2022-01-24 21:32:52 -06:00
|
|
|
- openstack/skyline-apiserver
|
2021-10-14 23:42:20 +08:00
|
|
|
skyline-console:
|
|
|
|
repos:
|
2022-01-24 21:32:52 -06:00
|
|
|
- openstack/skyline-console
|
Storlets to become official - Proposed governance change
[1] has a summary as well as a detailed description of all that
the team has done following the comments given in August.
Below is the original commit message describing the project
(with some necessary updates).
The project offers the ability to run user-defined functions
within OpenStack Swift nodes. For more info, please refer to
the docs (built from source) here:
http://storlets.readthedocs.io/en/latest/
The project combines a Swift middleware with Docker
to execute the user-defined Java/Python programs inside a container,
which in turn runs on Swift nodes. In addition, the repository
has several sample storlets used for functional tests
(as well as examples).
Open Source:
License: Apache 2.0 license.
Libraries: On top of what Swift requires, storlets make use
of Docker, OpenJDK JRE-8 plus standard Java libs. Non-standard Java
libs that we use are:
- simple-json (Apache 2.0 lic)
- logback-classic, logback-core (EPL v1.0 and the LGPL 2.1)
- slf4j-api (MIT lic)
Open Community:
We currently have 5 active contributors: 1 from IBM, 3 from NTT,
and this commit's author (an independent developer). We hold weekly
meetings in #openstack-meetings, with the agenda posted in [2].
Open Development:
The project is integrated with the OpenStack CI, and was created
according to the OpenStack "Project Creator's Guide".
We work closely with the Swift team, and one of our cores is also a
core in Swift. Yours truly was elected to be the first PTL by the team.
Our current and near term goals are documented in our wiki.
Open Design:
So far our design meetings were alongside those of Swift. Otherwise,
most of the work is done through Gerrit and our IRC channel. In addition
we have the [storlets] prefix within the openstack-dev ML.
Interoperability:
Our user-facing APIs are actually an extension of Swift's API
(http://storlets.readthedocs.io/en/latest/api/overview_api.html).
To the best of my understanding, storlets, being a Swift extension,
do not pose any authentication restrictions or integration issues.
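As a minimal, hedged sketch of how such an invocation might look from a client
(the endpoint, token and storlet name below are purely illustrative; the
X-Run-Storlet header is the invocation mechanism described in the Storlets API
docs), a storlet can be run on an object as it is read:

import requests

# Illustrative values only; in a real deployment these come from Keystone auth.
storage_url = "http://swift.example.com/v1/AUTH_demo"
token = "EXAMPLE_TOKEN"

# Ask Swift to run an already-deployed storlet on the object while serving the GET.
resp = requests.get(
    storage_url + "/mycontainer/myobject",
    headers={
        "X-Auth-Token": token,
        "X-Run-Storlet": "identitystorlet-1.0.jar",  # example storlet name
    },
)
print(resp.status_code, len(resp.content))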
Testing interface:
We have regular unit/functional/Python style tests running with every
review request. We generate a source tarball of the Python code as
part of the deployment scripts, and generate documentation.
The project has been presented in the Paris, Tokyo and Barcelona
summits.
Reasoning about languages used:
We use Python, Java and C.
Java and Python are used as the programming languages for storlets.
That is, the user-defined code running inside the Docker container on the storage
node can be Java or Python. Java is widely used in the Big Data analytics world as
well as in the media world (as demonstrated in our Paris and Barcelona talks).
Similar to the OpenStack SDK, we would like to support a set of bindings, and we are
currently working on a Python 'binding'.
We use C for passing fds between Swift and the Dockerized storlet, and only for that
purpose. Unfortunately, neither Python 2.7 nor Java has support for passing fds
between processes (over UDS). This feature is crucial for us, as we want to keep the
Docker container as isolated as possible. Apparently, Python 3.x does have
such support, but it seems that on the Java side we will still need the C code.
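For illustration only (this is not Storlets code; it assumes a Unix host and the
socket.send_fds()/recv_fds() helpers added in Python 3.9), passing an open fd
between processes over a Unix domain socket looks roughly like:

import os
import socket

# Create a connected Unix domain socket pair shared by parent and child.
parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

pid = os.fork()
if pid == 0:
    # Child: receive the fd and read through it.
    parent_sock.close()
    msg, fds, flags, addr = socket.recv_fds(child_sock, 1024, 1)
    with os.fdopen(fds[0], "rb") as f:
        print("child received", msg, "and read", f.read(16))
    os._exit(0)
else:
    # Parent: open a file (path is just an example) and hand its fd to the child.
    child_sock.close()
    with open("/etc/hostname", "rb") as f:
        socket.send_fds(parent_sock, [b"fd"], [f.fileno()])
    os.waitpid(pid, 0)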
Python is used for the Swift middleware. The middleware intercepts Swift requests
that wish to invoke a storlet on some data and routes the data to/from the storlet.
JRE Dependency:
We are now using OpenJDK.
Is this being implemented anywhere else:
There has been https://zerovm.readthedocs.io/zerocloud/overview.html
However, after being acquired by Rackspace, they have 'disappeared'.
We have tried to design storlets so that different runtime sandboxes
are pluggable with Docker containers being our first implementation
of a runtime sandbox. Having said that, as long as we have only one
implementation, this design has not been put to the test.
There is also what had been a proprietary solution and is now open-sourced:
https://www.joyent.com/manta
Joyent was acquired by Samsung this year.
[1] https://etherpad.openstack.org/p/storlets-big-tent
[2] https://wiki.openstack.org/wiki/Meetings/Storlets
Change-Id: Icf9dd346c731dd7378ba9015bd3cbc31e4246a6a
2016-08-10 10:22:34 +03:00
|
|
|
storlets:
|
|
|
|
ptl:
|
2020-04-03 15:57:32 +09:00
|
|
|
name: Takashi Kajinami
|
|
|
|
irc: tkajinam
|
2023-10-13 12:16:20 +09:00
|
|
|
email: kajinamit@oss.nttdata.com
|
2016-08-10 10:22:34 +03:00
|
|
|
irc-channel: openstack-storlets
|
|
|
|
service: Compute inside Object Storage service
|
|
|
|
mission: >
|
|
|
|
To enable a user-friendly, cost-effective, scalable and secure way for
|
|
|
|
executing storage-centric user-defined functions near the data within
|
|
|
|
OpenStack Swift
|
|
|
|
url: https://wiki.openstack.org/wiki/Storlets
|
|
|
|
deliverables:
|
|
|
|
storlets:
|
|
|
|
repos:
|
|
|
|
- openstack/storlets
|
2023-04-20 09:51:47 +01:00
|
|
|
sunbeam:
|
|
|
|
ptl:
|
2024-09-19 23:17:25 +09:00
|
|
|
name: Guillaume Boutry
|
|
|
|
irc: gboutry
|
|
|
|
email: guillaume.boutry@canonical.com
|
2023-09-26 20:27:42 +01:00
|
|
|
appointed:
|
2024-09-19 23:17:25 +09:00
|
|
|
- '2024.1'
|
2023-04-20 09:51:47 +01:00
|
|
|
irc-channel: openstack-sunbeam
|
|
|
|
service: Deployment and operational tooling for OpenStack
|
|
|
|
mission: >
|
|
|
|
To enable deployment and operation of OpenStack at any scale - from a
|
|
|
|
single node to small-scale edge deployments and large-scale clouds
|
|
|
|
containing many thousands of hypervisors - leveraging a hybrid deployment
|
|
|
|
model using Juju to manage both Kubernetes components and machine-based
|
|
|
|
components through the use of charms.
|
2023-12-12 11:21:04 +00:00
|
|
|
url: https://opendev.org/openstack/sunbeam-charms/src/branch/main/ops-sunbeam/README.rst
|
2023-04-20 09:51:47 +01:00
|
|
|
deliverables:
|
|
|
|
sunbeam-charms:
|
|
|
|
release-management: external
|
|
|
|
repos:
|
2023-11-06 11:26:52 +00:00
|
|
|
- openstack/sunbeam-charms
|
2015-09-02 10:47:14 -05:00
|
|
|
swift:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2024-09-10 20:50:22 -07:00
|
|
|
name: Tim Burke
|
|
|
|
irc: timburke
|
|
|
|
email: tburke@nvidia.com
|
2020-04-03 12:05:15 -05:00
|
|
|
appointed:
|
|
|
|
- victoria
|
2023-03-10 18:14:20 -06:00
|
|
|
- '2023.1'
|
|
|
|
- '2023.2'
|
2024-09-10 20:50:22 -07:00
|
|
|
- '2025.1'
|
2015-07-28 12:11:57 +02:00
|
|
|
irc-channel: openstack-swift
|
2015-07-14 12:15:21 -05:00
|
|
|
service: Object Storage service
|
2019-08-08 13:51:39 +02:00
|
|
|
mission: >
|
|
|
|
Provide software for storing and retrieving lots of data
|
|
|
|
with a simple API. Built for scale and optimized for durability,
|
|
|
|
availability, and concurrency across the entire data set.
|
2015-07-28 12:11:57 +02:00
|
|
|
url: https://wiki.openstack.org/wiki/Swift
|
|
|
|
deliverables:
|
2019-05-04 12:40:30 -07:00
|
|
|
liberasurecode:
|
|
|
|
release-management: none
|
|
|
|
repos:
|
2019-05-31 17:52:47 +02:00
|
|
|
- openstack/liberasurecode
|
2019-05-04 12:40:30 -07:00
|
|
|
pyeclib:
|
|
|
|
release-management: none
|
|
|
|
repos:
|
2019-05-31 17:52:47 +02:00
|
|
|
- openstack/pyeclib
|
2015-07-28 12:11:57 +02:00
|
|
|
python-swiftclient:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/python-swiftclient
|
|
|
|
swift:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/swift
|
|
|
|
swift-bench:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/swift-bench
|
Applying Tacker for Big Tent
Tacker is an OpenStack ecosystem project with a mission to build NFV
Orchestration services for OpenStack. Tacker intends to support
life-cycle management of Network Services and Virtual Network Functions
(VNF). This service is designed to be compatible with the ETSI NFV
Architecture Framework [1].
NFV Orchestration consists of two major components - VNF Manager and NFV
Orchestrator. This project envisions building both of these components
under the OpenStack platform. The VNF Manager (VNFM) handles the life-cycle
management of VNFs. It would instantiate VNFs on an OpenStack VIM and
facilitate configuration, monitoring, healing and scaling of the VMs
backing the VNF. Tacker plans to use many existing OpenStack services
to realize these features. NFV Orchestrator (NFVO) provides end-to-end
Network Service Orchestration. NFVO would in turn use VNFM component to
stand up the VNFs composed in a Network Service. NFVO will also render
VNF Forwarding Graphs (VNFFG) using a Service Function Chaining (SFC)
API across the instantiated VNFs. Tacker intends to use Neutron's
networking-sfc API [2] for this purpose.
NFV workflows are typically described using templates. The current
template schema of choice is TOSCA which is based on OASIS Standard [3].
At this moment TOSCA has the leading mindshare across VNF vendors and
operators. The Tacker project is working closely with the OASIS TOSCA NFV
standards group to shape the evolution of its Simple Profile for NFV [4].
The project is also leveraging and contributing to the Heat Translator's
tosca-parser work [5]. Beyond TOSCA other templating schemes can be
introduced in the future.
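As a small, hedged example of the tosca-parser work referenced in [5],
loading a TOSCA template and listing its node templates could look like the
following (the template file name is hypothetical):

    # Minimal sketch using the tosca-parser library [5]; not Tacker code.
    from toscaparser.tosca_template import ToscaTemplate

    tosca = ToscaTemplate('sample_vnfd.yaml')  # hypothetical template file
    for node in tosca.nodetemplates:
        # Each node template describes a deployable unit such as a VDU,
        # connection point or virtual link.
        print(node.name, node.type)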
Tacker started as a ServiceVM project under Neutron during the Atlanta
Summit (2014). The evolution was slow and interest dropped off entirely
after the neutron vendor plugin decomposition. However the project had
code assets for a template based lifecycle management of ServiceVMs.
The remaining Tacker team members met in early 2015, brainstormed and
eventually decided to pivot to ETSI NFV Orchestration use case. ETSI NFV
envisions an Information Model based Orchestration and Life Cycle
management of Virtual Network Functions. Since then the Tacker developer
community has grown over the last three cycles and is now getting contributions
from a diverse set of participants [6] [7]. Tacker is also now actively
collaborating with downstream OPNFV Projects like SFC [8], Parser [9] and
Multisite [10]. Tacker functionality has been demonstrated in both
OpenStack Vancouver and Tokyo Summits and in the recent OPNFV Summit.
Tacker project strictly follows the Four Opens [11] suggested by OpenStack
Foundation. Tacker code has been developed under an Apache 2.0 license.
All code is submitted and reviewed through OpenStack Gerrit. The project
maintains a core team that approves all changes. Bugs are filed, reviewed
and tracked in Launchpad [12]. The project obeys the coordinated project
interfaces, including tox, pbr, global-requirements, etc. Tacker gate now
runs pep8, py27, docs tasks and a dsvm functional test.
In summary, before Tacker, operators were expected to string together custom
solutions using Heat, Ceilometer, etc. to achieve similar functionality.
Tacker reduces such duplicated and complex effort in the industry by bringing
together a community of NFV operators and VNF vendors to collaborate and
build out a template based workflow engine and a higher level OpenStack
"NFV Orchestration" API.
[1] https://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.02.01_60/gs_nfv002v010201p.pdf
[2] http://docs.openstack.org/developer/networking-sfc/
[3] https://www.oasis-open.org/committees/tosca/
[4] http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/tosca-nfv-v1.0.html
[5] https://github.com/openstack/tosca-parser
[6] http://stackalytics.openstack.org/?project_type=openstack-others&module=tacker&metric=patches
[7] http://stackalytics.com/report/contribution/tacker/90
[8] https://wiki.opnfv.org/service_function_chaining
[9] https://wiki.opnfv.org/parser
[10] https://wiki.opnfv.org/multisite
[11] https://github.com/openstack/governance/blob/master/reference/opens.rst
[12] https://launchpad.net/tacker
[13] http://git.openstack.org/cgit/openstack/tacker
[14] http://git.openstack.org/cgit/openstack/python-tackerclient
[15] http://git.openstack.org/cgit/openstack/tacker-horizon
[16] http://git.openstack.org/cgit/openstack/tacker-specs
Change-Id: Idd7e9e8e4d428f9de28d8527e2f459b8ab8b8288
2016-02-04 00:12:41 +00:00
|
|
|
tacker:
|
|
|
|
ptl:
|
2020-04-11 04:34:34 +09:00
|
|
|
name: Yasufumi Ogawa
|
|
|
|
irc: yasufum
|
|
|
|
email: yasufum.o@gmail.com
|
|
|
|
appointed:
|
|
|
|
- victoria
|
2016-02-04 00:12:41 +00:00
|
|
|
irc-channel: tacker
|
|
|
|
service: NFV Orchestration service
|
|
|
|
mission: >
|
|
|
|
To implement Network Function Virtualization (NFV) Orchestration services
|
|
|
|
and libraries for end-to-end life-cycle management of Network Services
|
|
|
|
and Virtual Network Functions (VNFs).
|
|
|
|
url: https://wiki.openstack.org/wiki/Tacker
|
|
|
|
deliverables:
|
|
|
|
tacker:
|
|
|
|
repos:
|
|
|
|
- openstack/tacker
|
2016-08-04 16:29:09 -07:00
|
|
|
tacker-horizon:
|
|
|
|
repos:
|
|
|
|
- openstack/tacker-horizon
|
2016-02-04 00:12:41 +00:00
|
|
|
python-tackerclient:
|
|
|
|
repos:
|
|
|
|
- openstack/python-tackerclient
|
|
|
|
tacker-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2016-02-04 00:12:41 +00:00
|
|
|
repos:
|
|
|
|
- openstack/tacker-specs
|
2023-03-02 11:48:24 +09:00
|
|
|
heat-translator:
|
|
|
|
repos:
|
|
|
|
- openstack/heat-translator
|
|
|
|
tosca-parser:
|
|
|
|
repos:
|
|
|
|
- openstack/tosca-parser
|
2015-12-08 09:15:48 +09:00
|
|
|
Telemetry:
|
|
|
|
ptl:
|
2023-09-21 14:22:53 +10:00
|
|
|
name: Erno Kuvaja
|
|
|
|
irc: jokke_
|
|
|
|
email: jokke@usr.fi
|
2019-03-21 11:50:24 +01:00
|
|
|
appointed:
|
|
|
|
- train
|
2015-12-08 09:15:48 +09:00
|
|
|
irc-channel: openstack-telemetry
|
|
|
|
service: Telemetry service
|
|
|
|
mission: >
|
|
|
|
To reliably collect measurements of the utilization of the physical and
|
|
|
|
virtual resources comprising deployed clouds, persist these data for
|
|
|
|
subsequent retrieval and analysis, and trigger actions when defined
|
|
|
|
criteria are met.
|
|
|
|
url: https://wiki.openstack.org/wiki/Telemetry
|
|
|
|
deliverables:
|
|
|
|
aodh:
|
|
|
|
repos:
|
|
|
|
- openstack/aodh
|
|
|
|
ceilometer:
|
|
|
|
repos:
|
|
|
|
- openstack/ceilometer
|
2016-02-26 02:50:08 -05:00
|
|
|
telemetry-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-12-08 09:15:48 +09:00
|
|
|
repos:
|
2016-02-26 02:50:08 -05:00
|
|
|
- openstack/telemetry-specs
|
2015-12-08 09:15:48 +09:00
|
|
|
ceilometermiddleware:
|
|
|
|
repos:
|
|
|
|
- openstack/ceilometermiddleware
|
2015-12-18 17:04:16 -05:00
|
|
|
python-aodhclient:
|
|
|
|
repos:
|
|
|
|
- openstack/python-aodhclient
|
2023-09-13 14:22:17 +02:00
|
|
|
python-observabilityclient:
|
|
|
|
repos:
|
|
|
|
- openstack/python-observabilityclient
|
2017-12-09 21:07:56 +05:30
|
|
|
telemetry-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/telemetry-tempest-plugin
|
2015-09-02 10:47:14 -05:00
|
|
|
trove:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2024-09-19 23:17:25 +09:00
|
|
|
name: wu chunyang
|
2024-05-27 14:17:09 +08:00
|
|
|
irc: wuchunyang
|
|
|
|
email: wchy1001@gmail.com
|
2018-08-13 13:45:06 -04:00
|
|
|
appointed:
|
|
|
|
- stein
|
2022-03-10 11:25:57 -06:00
|
|
|
- zed
|
2023-12-05 11:01:43 +08:00
|
|
|
- '2024.1'
|
2024-05-27 14:17:09 +08:00
|
|
|
- '2024.2'
|
2015-07-28 12:11:57 +02:00
|
|
|
irc-channel: openstack-trove
|
|
|
|
service: Database service
|
2015-06-12 06:21:19 +00:00
|
|
|
mission: >
|
2015-07-28 12:11:57 +02:00
|
|
|
To provide scalable and reliable Cloud Database as a Service functionality
|
|
|
|
for both relational and non-relational database engines, and to continue to
|
|
|
|
improve its fully-featured and extensible open source framework.
|
|
|
|
url: https://wiki.openstack.org/wiki/Trove
|
2015-07-16 15:57:12 +02:00
|
|
|
deliverables:
|
2015-07-28 12:11:57 +02:00
|
|
|
python-troveclient:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/python-troveclient
|
|
|
|
trove:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/trove
|
2015-12-11 10:52:32 -07:00
|
|
|
trove-dashboard:
|
|
|
|
repos:
|
|
|
|
- openstack/trove-dashboard
|
2015-07-28 12:11:57 +02:00
|
|
|
trove-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/trove-specs
|
2017-12-11 20:44:01 +05:30
|
|
|
trove-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/trove-tempest-plugin
|
Venus official project status
Venus is a unified log management module built according to the
key requirements of OpenStack for log storage, retrieval, analysis
and so on. This module can provide a one-stop solution to log
collection, cleaning, indexing, analysis, alarming, visualization,
report generation, etc.
Venus can help operators and maintainers improve the level of
platform management; the goal of the project is to make log
retrieval no longer difficult and to make warnings for typical
log errors a reality.
The Venus project was contributed to the OpenStack community in November
2020. At that time, functions such as collection, retrieval, and error
statistics of platform logs and service logs were developed. In order to
make the Venus project easier and more practical, the functions of Venus
have been enriched and improved over the past six months as follows:
1. Improved and standardized the completed code, and added some test code[1].
2. A venus-dashboard plug-in [2] based on Horizon has been developed, which can
provide operators with a log retrieval page, and the function is
still being improved.
3. To make the deployment more versatile, besides deploying with devstack,
a kolla-based Venus deployment script has been submitted[3].
4. The typical log errors of the OpenStack platform are being collected,
and the alarm function is also under development.
The offline promotion of Venus is also continuing. Venus was shared at a
Hackathon in April 2021 and attracted the interest of companies such as
China Unicom, who expressed their requirements for Venus. At the upcoming
OpenInfra Days China, Venus will be promoted so that more users and
developers can learn about and use it.
In the future, in addition to improving existing functions, algorithms
will be added for log fault location to enable Venus to play a greater
role in intelligent operation and maintenance.
[1] https://review.opendev.org/q/project:inspur/venus
[2] https://review.opendev.org/q/project:inspur/venus-dashboard
[3] https://review.opendev.org/q/topic:%22venus%22+(status:open%20OR%20status:merged)
[4] ML discussion: http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019748.html
Change-Id: Id82df49a776537bcd38b49a7f95fc0c10bc2f925
2021-08-17 14:27:09 +08:00
|
|
|
venus:
|
|
|
|
ptl:
|
2023-11-01 02:33:02 +00:00
|
|
|
name: Eric Zhang
|
2024-03-22 09:00:21 +09:00
|
|
|
irc: No nick supplied
|
2023-11-01 02:33:02 +00:00
|
|
|
email: zhanglf01@inspur.com
|
2022-02-27 18:55:50 -06:00
|
|
|
appointed:
|
|
|
|
- zed
|
2023-03-10 18:14:20 -06:00
|
|
|
- '2023.1'
|
2023-11-01 02:33:02 +00:00
|
|
|
- '2024.1'
|
2021-08-17 14:27:09 +08:00
|
|
|
irc-channel: openstack-venus
|
|
|
|
service: Log management service
|
|
|
|
mission: >
|
|
|
|
Venus is a unified log management module, according to the key
|
|
|
|
requirements of OpenStack in log storage, retrieval, analysis
|
|
|
|
and so on. This module can provide a one-stop solution to log
|
|
|
|
collection, cleaning, indexing, analysis, alarm, visualization,
|
|
|
|
report generation, etc.
|
|
|
|
url: https://wiki.openstack.org/wiki/Venus
|
|
|
|
deliverables:
|
|
|
|
venus:
|
|
|
|
repos:
|
2021-10-25 08:45:22 +00:00
|
|
|
- openstack/venus
|
2021-08-17 14:27:09 +08:00
|
|
|
venus-specs:
|
|
|
|
release-management: none
|
|
|
|
repos:
|
2021-10-25 08:45:22 +00:00
|
|
|
- openstack/venus-specs
|
2021-08-17 14:27:09 +08:00
|
|
|
venus-dashboard:
|
|
|
|
repos:
|
2021-10-25 08:45:22 +00:00
|
|
|
- openstack/venus-dashboard
|
2021-08-17 14:27:09 +08:00
|
|
|
python-venusclient:
|
|
|
|
repos:
|
2021-10-25 08:45:22 +00:00
|
|
|
- openstack/python-venusclient
|
2021-08-17 14:27:09 +08:00
|
|
|
venus-tempest-plugin:
|
|
|
|
repos:
|
2021-10-25 08:45:22 +00:00
|
|
|
- openstack/venus-tempest-plugin
|
Add project Vitrage to OpenStack big-tent
This is a request to add the Vitrage project[1] to the Big Tent.
Vitrage aims to be the OpenStack RCA (Root Cause Analysis) Engine for
organizing, analyzing and expanding OpenStack alarms & events, yielding
insights regarding the root cause of problems and deducing their
existence before they are directly detected.
Vitrage provides a holistic and complete view of the cloud, with
a clear visualization of the physical-to-virtual (and soon the
applicative) layers. Vitrage receives information from various
OpenStack data sources (Aodh, Nova, Cinder and Neutron), as well as
from external data sources (like Nagios, and in the future Zabbix)
and combines it in a topology graph database. This topology is used to
deduce that certain problems in the physical layer should cause problems
in the virtual layer; to trigger corresponding alarms; to modify
(currently only in Vitrage) resources states; and to reflect the causal
relationship of the alarms in the cloud.
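As a purely illustrative sketch of that deduction step (not Vitrage code),
the topology can be kept in a directed graph and an alarm on a physical
resource can be propagated to the virtual resources it hosts; the resource
names and the use of networkx here are assumptions for the example:

    # Toy example of RCA-style deduction over a resource topology graph.
    import networkx as nx

    topology = nx.DiGraph()
    topology.add_edge('host-1', 'vm-a', relationship='contains')
    topology.add_edge('host-1', 'vm-b', relationship='contains')

    def deduce_affected(graph, failed_resource):
        """Return resources expected to be affected by the failure."""
        return sorted(nx.descendants(graph, failed_resource))

    # A Nagios-style 'host down' alarm on host-1 lets us deduce alarms on
    # the VMs it contains.
    print(deduce_affected(topology, 'host-1'))  # ['vm-a', 'vm-b']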
Vitrage development started right after Mitaka Summit, where we had
meetings with other OpenStack projects (like Ceilometer/Aodh), presented
to them Vitrage ideas, verified that there was no overlap between our
projects, and got their blessing.
Vitrage follows OpenStack guidelines and “four opens” from day one.
Vitrage code and specifications are entirely open source. The project
is fully licensed under the Apache 2.0 license, and the project uses
public code reviews in gerrit. The code can be found here[2][3][4][5].
Vitrage gate runs docs, pep8, python27, python34 and tempest tests.
The #openstack-vitrage IRC channel on freenode is logged [6]. The Vitrage
team holds weekly IRC meetings on the OpenStack official meeting channel,
which is logged as well[7]. The openstack-dev mailing list is used to
publicly discuss Vitrage-related issues. The Vitrage wiki page[1] consists
of high-level and low-level design documents, presentations and demos.
OpenStack launchpad[8] is used to manage blueprints and bugs. We started
the process of Newton design discussions right after Austin Summit. We
perform these discussions in the IRC meetings, in the mailing list, and
in etherpads[9][10] created for this purpose.
Vitrage was presented in two sessions in Austin Summit[11][12], as well
as in demos in the marketplace. In addition, Aodh-Vitrage integration
was discussed in Aodh design session[13][14]. One Aodh fix was already
made to enable this integration[15]. Other integrations, with Monasca
and Congress, were discussed during the summit, and we are continuing
such discussions in the mailing list.
In addition to the OpenStack community, Vitrage is also involved in OPNFV
projects. Vitrage is considered a reference implementation for OPNFV
Doctor Inspector component [16][17], as well as for OPNFV PinPoint[18]
project. Vitrage is about to be presented, together with OPNFV Doctor
and OpenStack Congress, at the OPNFV Summit in Berlin[19]. A Vitrage demo
is going to be presented at the Doctor POC booth.
For the past six months, since Vitrage development began, we have been
encouraging interaction and cooperation with other projects and
contributors in OpenStack. To date, a few companies and projects have
worked with us on formulating requirements as well as contributed code.
We are working hard at involving additional players in Vitrage, to help
drive its feature set so that it supports a wide range of needs from the
community.
To summarize, Vitrage fills a real need in OpenStack. Vitrage's holistic
and complete view that includes all of the layers of the cloud, as well
as its alarm analysis and correlation, is currently lacking in
OpenStack. As Vitrage enhances OpenStack capabilities, raises a lot of
interest in the community, and is developed by OpenStack standards, we
believe it belongs to the big tent.
[1] https://wiki.openstack.org/wiki/Vitrage
[2] https://github.com/openstack/vitrage
[3] https://github.com/openstack/vitrage-dashboard
[4] https://github.com/openstack/python-vitrageclient
[5] https://github.com/openstack/vitrage-specs
[6] http://eavesdrop.openstack.org/irclogs/%23openstack-vitrage/
[7] http://eavesdrop.openstack.org/meetings/vitrage/
[8] https://launchpad.net/vitrage
[9] https://etherpad.openstack.org/p/vitrage-newton-planning
[10] https://etherpad.openstack.org/p/vitrage-overlapping-templates-support-design
[11] https://www.youtube.com/watch?v=9Qw5coTLgMo
[12] https://www.youtube.com/watch?v=ey68KNKXc5c
[13] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9203
[14] https://etherpad.openstack.org/p/newton-telemetry-vitrage
[15] https://review.openstack.org/#/c/314683/
[16] https://wiki.opnfv.org/display/doctor/Doctor+Home
[17] https://jira.opnfv.org/browse/DOCTOR-57
[18] https://wiki.opnfv.org/display/pinpoint/Pinpoint+Home
[19] https://www.eventscribe.com/2016/OPNFV/aaSearchByDay.asp?h=Full%20Schedule&BCFO=P|G see “Failure Inspection in Doctor using Vitrage and Congress”
Change-Id: I47f7302da3d35a4620fc910c38384b4b4c71de7d
2016-05-24 08:03:53 +00:00
|
|
|
vitrage:
|
|
|
|
ptl:
|
2023-05-03 13:12:13 +02:00
|
|
|
name: Dmitriy Rabotyagov
|
|
|
|
irc: noonedeadpunk
|
|
|
|
email: noonedeadpunk@gmail.com
|
|
|
|
appointed:
|
|
|
|
- '2023.2'
|
2016-05-24 08:03:53 +00:00
|
|
|
irc-channel: openstack-vitrage
|
|
|
|
service: RCA (Root Cause Analysis) service
|
|
|
|
mission: >
|
|
|
|
To organize, analyze and visualize OpenStack alarms & events, yield
|
|
|
|
insights regarding the root cause of problems and deduce their existence
|
|
|
|
before they are directly detected.
|
|
|
|
url: https://wiki.openstack.org/wiki/Vitrage
|
|
|
|
deliverables:
|
|
|
|
vitrage:
|
|
|
|
repos:
|
|
|
|
- openstack/vitrage
|
|
|
|
vitrage-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2016-05-24 08:03:53 +00:00
|
|
|
repos:
|
|
|
|
- openstack/vitrage-specs
|
2017-12-09 17:24:38 +05:30
|
|
|
vitrage-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/vitrage-tempest-plugin
|
Add project Vitrage to OpenStack big-tent
This is a request to add the Vitrage project[1] to the Big Tent.
Vitrage aims to be the OpenStack RCA (Root Cause Analysis) Engine for
organizing, analyzing and expanding OpenStack alarms & events, yielding
insights regarding the root cause of problems and deducing their
existence before they are directly detected.
Vitrage provides an holistic and complete view of the cloud, with
a clear visualization of the physical-to-virtual (and soon the
applicative) layers. Vitrage receives information from various
OpenStack data sources (Aodh, Nova, Cinder and Neutron), as well as
from external data sources (like Nagios, and in the future Zabbix)
and combines it in a topology graph database. This topology is used to
deduce that certain problems in the physical layer should cause problems
in the virtual layer; to trigger corresponding alarms; to modify
(currently only in Vitrage) resources states; and to reflect the causal
relationship of the alarms in the cloud.
Vitrage development started right after Mitaka Summit, where we had
meetings with other OpenStack projects (like Ceilometer/Aodh), presented
to them Vitrage ideas, verified that there was no overlap between our
projects, and got their blessing.
Vitrage follows OpenStack guidelines and “four opens” from day one.
Vitrage code and specifications are entirely open source. The project
is fully licensed under the Apache 2.0 license, and the project uses
public code reviews in gerrit. The code can be found here[2][3][4][5].
Vitrage gate runs docs, pep8, python27, python34 and tempest tests.
The #openstack-vitrage IRC channel on freenode is logged [6]. Vitrage
team holds weekly IRC meetings on OpenStack official meeting channel,
which is logged as well[7]. Openstack-dev mailing list is used to
publicly discuss Vitrage related issues. Vitrage wiki page[1] consists
of high-level and low-level design documents, presentations and demos.
OpenStack launchpad[8] is used to manage blueprints and bugs. We started
the process of Newton design discussions right after Austin Summit. We
perform these discussions in the IRC meetings, in the mailing list, and
in etherpads[9][10] created for this purpose.
Vitrage was presented in two sessions at the Austin Summit[11][12], as
well as in demos in the marketplace. In addition, Aodh-Vitrage
integration was discussed in an Aodh design session[13][14]. One Aodh
fix has already been made to enable this integration[15]. Other
integrations, with Monasca and Congress, were discussed during the
summit, and we are continuing those discussions on the mailing list.
In addition to the OpenStack community, Vitrage is also involved in
OPNFV projects. Vitrage is considered a reference implementation for
the OPNFV Doctor Inspector component[16][17], as well as for the OPNFV
PinPoint[18] project. Vitrage is about to be presented, together with
OPNFV Doctor and OpenStack Congress, at the OPNFV Summit in
Berlin[19]. A Vitrage demo will be shown at the Doctor PoC booth.
For the past six months, since Vitrage development began, we have been
encouraging interaction and cooperation with other projects and
contributors in OpenStack. To date, a few companies and projects have
worked with us on formulating requirements and have contributed code.
We are working hard to involve additional players in Vitrage, to help
drive its feature set so that it supports a wide range of needs from
the community.
To summarize, Vitrage fills a real need in OpenStack. Vitrage's
holistic and complete view across all of the layers of the cloud, as
well as its alarm analysis and correlation, is currently lacking in
OpenStack. As Vitrage enhances OpenStack's capabilities, has raised
considerable interest in the community, and is developed according to
OpenStack standards, we believe it belongs in the big tent.
[1] https://wiki.openstack.org/wiki/Vitrage
[2] https://github.com/openstack/vitrage
[3] https://github.com/openstack/vitrage-dashboard
[4] https://github.com/openstack/python-vitrageclient
[5] https://github.com/openstack/vitrage-specs
[6] http://eavesdrop.openstack.org/irclogs/%23openstack-vitrage/
[7] http://eavesdrop.openstack.org/meetings/vitrage/
[8] https://launchpad.net/vitrage
[9] https://etherpad.openstack.org/p/vitrage-newton-planning
[10] https://etherpad.openstack.org/p/vitrage-overlapping-templates-support-design
[11] https://www.youtube.com/watch?v=9Qw5coTLgMo
[12] https://www.youtube.com/watch?v=ey68KNKXc5c
[13] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9203
[14] https://etherpad.openstack.org/p/newton-telemetry-vitrage
[15] https://review.openstack.org/#/c/314683/
[16] https://wiki.opnfv.org/display/doctor/Doctor+Home
[17] https://jira.opnfv.org/browse/DOCTOR-57
[18] https://wiki.opnfv.org/display/pinpoint/Pinpoint+Home
[19] https://www.eventscribe.com/2016/OPNFV/aaSearchByDay.asp?h=Full%20Schedule&BCFO=P|G see “Failure Inspection in Doctor using Vitrage and Congress”
Change-Id: I47f7302da3d35a4620fc910c38384b4b4c71de7d
2016-05-24 08:03:53 +00:00
|
|
|
python-vitrageclient:
|
|
|
|
repos:
|
|
|
|
- openstack/python-vitrageclient
|
|
|
|
vitrage-dashboard:
|
|
|
|
repos:
|
|
|
|
- openstack/vitrage-dashboard
|
2020-02-21 19:22:56 +01:00
|
|
|
xstatic-dagre:
|
|
|
|
repos:
|
|
|
|
- openstack/xstatic-dagre
|
|
|
|
xstatic-dagre-d3:
|
|
|
|
repos:
|
|
|
|
- openstack/xstatic-dagre-d3
|
|
|
|
xstatic-graphlib:
|
|
|
|
repos:
|
|
|
|
- openstack/xstatic-graphlib
|
|
|
|
xstatic-lodash:
|
|
|
|
repos:
|
|
|
|
- openstack/xstatic-lodash
|
|
|
|
xstatic-moment:
|
|
|
|
repos:
|
|
|
|
- openstack/xstatic-moment
|
|
|
|
xstatic-moment-timezone:
|
|
|
|
repos:
|
|
|
|
- openstack/xstatic-moment-timezone
|
2016-05-25 14:11:05 +02:00
|
|
|
watcher:
|
2025-01-24 19:32:22 +00:00
|
|
|
leadership_type: distributed
|
|
|
|
liaisons:
|
|
|
|
release:
|
|
|
|
- name: Sean Mooney
|
|
|
|
irc: sean-k-mooney
|
|
|
|
email: smooney@redhat.com
|
|
|
|
tact-sig:
|
|
|
|
- name: Marios Andreou
|
|
|
|
irc: marios
|
|
|
|
email: marios@redhat.com
|
|
|
|
- name: Chandan Kumar
|
|
|
|
irc: chandankumar
|
|
|
|
email: chkumar@redhat.com
|
|
|
|
security:
|
|
|
|
- name: Dan Smith
|
|
|
|
irc: dansmith
|
|
|
|
email: danms@danplanet.com
|
|
|
|
tc-liaison:
|
|
|
|
- name: Ghanshyam Mann
|
|
|
|
irc: gmann
|
|
|
|
email: gmann@ghanshyammann.com
|
2016-05-25 14:11:05 +02:00
|
|
|
irc-channel: openstack-watcher
|
|
|
|
service: Infrastructure Optimization service
|
|
|
|
mission: >
|
|
|
|
Watcher's goal is to provide a flexible and scalable resource optimization
|
|
|
|
service for multi-tenant OpenStack-based clouds.
|
|
|
|
url: https://wiki.openstack.org/wiki/Watcher
|
|
|
|
deliverables:
|
|
|
|
watcher:
|
|
|
|
repos:
|
|
|
|
- openstack/watcher
|
|
|
|
watcher-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2016-05-25 14:11:05 +02:00
|
|
|
repos:
|
|
|
|
- openstack/watcher-specs
|
2017-08-01 13:51:16 +03:00
|
|
|
watcher-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/watcher-tempest-plugin
|
2016-05-25 14:11:05 +02:00
|
|
|
python-watcherclient:
|
|
|
|
repos:
|
|
|
|
- openstack/python-watcherclient
|
|
|
|
watcher-dashboard:
|
|
|
|
repos:
|
|
|
|
- openstack/watcher-dashboard
|
2015-09-02 10:47:14 -05:00
|
|
|
zaqar:
|
2015-10-13 20:05:21 +00:00
|
|
|
ptl:
|
2022-09-20 23:24:42 +09:00
|
|
|
name: Hao Wang
|
2023-02-16 11:16:15 +11:00
|
|
|
irc: wanghao
|
2022-02-27 19:04:43 -06:00
|
|
|
email: sxmatch1986@gmail.com
|
2019-03-21 11:44:35 +01:00
|
|
|
appointed:
|
|
|
|
- train
|
2020-04-03 12:07:57 -05:00
|
|
|
- victoria
|
2021-03-12 09:18:59 -06:00
|
|
|
- xena
|
2022-02-27 19:04:43 -06:00
|
|
|
- zed
|
2015-07-28 12:11:57 +02:00
|
|
|
irc-channel: openstack-zaqar
|
|
|
|
service: Message service
|
2015-06-14 22:59:03 +02:00
|
|
|
mission: >
|
2015-07-28 12:11:57 +02:00
|
|
|
To produce an OpenStack messaging service that affords a
|
|
|
|
variety of distributed application patterns in an efficient,
|
|
|
|
scalable and highly-available manner, and to create and maintain
|
|
|
|
associated Python libraries and documentation.
|
|
|
|
url: https://wiki.openstack.org/wiki/Zaqar
|
2015-07-16 15:57:12 +02:00
|
|
|
deliverables:
|
2015-07-28 12:11:57 +02:00
|
|
|
python-zaqarclient:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/python-zaqarclient
|
|
|
|
zaqar:
|
2015-07-16 15:57:12 +02:00
|
|
|
repos:
|
2015-07-28 12:11:57 +02:00
|
|
|
- openstack/zaqar
|
|
|
|
zaqar-specs:
|
2018-10-25 12:39:22 +02:00
|
|
|
release-management: none
|
2015-07-28 12:11:57 +02:00
|
|
|
repos:
|
|
|
|
- openstack/zaqar-specs
|
2017-09-06 21:41:01 +05:30
|
|
|
zaqar-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/zaqar-tempest-plugin
|
2015-11-24 06:58:45 +13:00
|
|
|
zaqar-ui:
|
|
|
|
repos:
|
|
|
|
- openstack/zaqar-ui
|
Add project Zun to OpenStack big-tent
Addition of the Zun Project - a collaboration of 22 engineers
from 12 affiliations [1] to bring prevailing container technologies
to OpenStack.
Zun originated from Magnum, but became an independent project
based on a decision made by the community at the OpenStack Austin
design summit [2]. After the split, Zun focuses on providing a
container service for OpenStack, while Magnum focuses on deployment
and management of Container Orchestration Engines (COEs).
Zun aligns with the OpenStack Mission by providing an
OpenStack-native container service backed by various container
technologies. Zun directly builds on existing OpenStack services,
such as Nova, Neutron, Glance, Keystone, Horizon, and potentially
more.
Zun uses the Apache v2.0 license. All library dependencies allow for
unrestricted distribution and deployment.
Our PTL is chosen by our contributors [3] and holds weekly IRC
meetings in #openstack-meeting, which are logged [4][5].
We are also available in #openstack-zun for daily discussion and
as a way for users to interact with our developers openly. Zun
provides a level and open playing field for collaboration because
it doesn't favor a specific vendor or contributors from a specific
vendor.
Zun uses Gerrit for code reviews [6][7][8], and Launchpad [9][10]
for bug and blueprint tracking. Our code is automatically tested
by the OpenStack CI infrastructure. Our PTL serves as a contact for
cross-project teams in OpenStack.
We make extensive use of existing software, and have not
deliberately reproduced functionality that already exists in other
ecosystem projects. Zun doesn't compete with COEs such as Swarm,
Mesos, or Kubernetes, but instead offers a pluggable architecture
that lets vendors plug in COEs of their choice. We are open to
integrating prevailing container technologies that help OpenStack
operators offer application containers to their cloud users.
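As a rough illustration of what such a pluggable architecture can
look like, here is a short, hypothetical Python sketch; the class and
method names are invented for this example and are not Zun's real
driver API. The service talks only to an abstract driver interface,
and the operator chooses which backend implementation is loaded:

    # Hypothetical sketch of a pluggable container-driver interface
    # (invented names; not Zun's actual driver API).
    import abc

    class ContainerDriver(abc.ABC):
        """Interface that each container backend plugin implements."""

        @abc.abstractmethod
        def create(self, name, image):
            """Create a container and return a backend-specific id."""

        @abc.abstractmethod
        def delete(self, container_id):
            """Delete the container identified by container_id."""

    class FakeDockerDriver(ContainerDriver):
        """Toy in-memory stand-in for a Docker-backed implementation."""

        def __init__(self):
            self._containers = {}

        def create(self, name, image):
            container_id = "docker-%d" % (len(self._containers) + 1)
            self._containers[container_id] = (name, image)
            return container_id

        def delete(self, container_id):
            self._containers.pop(container_id, None)

    # The backend would normally be selected via configuration; the rest
    # of the service only ever uses the ContainerDriver interface.
    driver = FakeDockerDriver()
    cid = driver.create("web", "nginx:latest")
    driver.delete(cid)
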
Project direction has been publicly discussed at the Austin and
Barcelona OpenStack Design Summits, and in the weekly IRC meetings.
We also use the openstack-dev ML for project discussion.
Zun is compatible with OpenStack APIs, and uses Keystone middleware
to integrate with OpenStack Identity.
There are numerous active contributors from several affiliations who
have made significant contributions to Zun.
The Zun team is happy to participate in any goals specified by the
TC, and meet any policies that the TC requires all projects to meet.
[1] http://stackalytics.com/?project_type=openstack-others&release=all&metric=commits&module=zun
[2] https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-November/107887.html
[4] http://eavesdrop.openstack.org/meetings/zun/2016/
[5] http://eavesdrop.openstack.org/meetings/higgins/2016/
[6] https://review.openstack.org/#/q/project:openstack/zun
[7] https://review.openstack.org/#/q/project:openstack/python-zunclient
[8] https://review.openstack.org/#/q/project:openstack/zun-ui
[9] https://launchpad.net/zun
[10] https://launchpad.net/zun-ui
Change-Id: I9c75496f7b40d3fb78e8c4c92e6bfc2bc161e52f
2016-11-24 16:07:49 -06:00
|
|
|
zun:
|
|
|
|
ptl:
|
2022-10-07 19:55:34 -05:00
|
|
|
name: Hongbin Lu
|
2024-03-22 09:00:21 +09:00
|
|
|
irc: No nick supplied
|
2022-10-07 19:55:34 -05:00
|
|
|
email: hongbin034@gmail.com
|
2021-03-11 13:08:39 -06:00
|
|
|
appointed:
|
|
|
|
- xena
|
2021-09-08 10:08:43 -05:00
|
|
|
- yoga
|
2022-02-27 17:48:23 -06:00
|
|
|
- zed
|
2023-03-10 18:14:20 -06:00
|
|
|
- '2023.1'
|
2016-11-24 16:07:49 -06:00
|
|
|
irc-channel: openstack-zun
|
|
|
|
service: Containers service
|
|
|
|
mission: >
|
|
|
|
To provide an OpenStack containers service that integrates with various
|
|
|
|
container technologies for managing application containers on OpenStack.
|
|
|
|
url: https://wiki.openstack.org/wiki/Zun
|
|
|
|
deliverables:
|
|
|
|
python-zunclient:
|
|
|
|
repos:
|
|
|
|
- openstack/python-zunclient
|
|
|
|
zun:
|
|
|
|
repos:
|
|
|
|
- openstack/zun
|
2017-09-06 11:29:03 -04:00
|
|
|
zun-tempest-plugin:
|
|
|
|
repos:
|
|
|
|
- openstack/zun-tempest-plugin
|
2016-11-24 16:07:49 -06:00
|
|
|
zun-ui:
|
|
|
|
repos:
|
|
|
|
- openstack/zun-ui
|
2024-04-30 03:53:39 +00:00
|
|
|
kuryr:
|
|
|
|
repos:
|
|
|
|
- openstack/kuryr
|
|
|
|
kuryr-libnetwork:
|
|
|
|
repos:
|
|
|
|
- openstack/kuryr-libnetwork
|