---
adjutant:
ptl:
name: Dale Smith
irc: dalees
email: dale@catalystcloud.nz
appointed:
- victoria
- yoga
- zed
- '2023.1'
irc-channel: openstack-adjutant
service: Operations Processes automation
mission: >
To provide an extensible API framework for exposing to users an
organization's automated business processes relating to account management
across OpenStack and external systems, that can be adapted to the unique
requirements of an organization's processes.
url: http://adjutant.readthedocs.io/
deliverables:
adjutant:
repos:
- openstack/adjutant
adjutant-ui:
repos:
- openstack/adjutant-ui
python-adjutantclient:
repos:
- openstack/python-adjutantclient
barbican:
ptl:
name: Grzegorz Grasza
irc: xek
email: xek@redhat.com
appointed:
- victoria
- xena
irc-channel: openstack-barbican
service: Key Manager service
mission: >
To produce a secret storage and generation system capable of providing key
management for services wishing to enable encryption features.
url: https://wiki.openstack.org/wiki/Barbican
deliverables:
barbican:
repos:
- openstack/barbican
ansible-role-atos-hsm:
repos:
- openstack/ansible-role-atos-hsm
ansible-role-lunasa-hsm:
repos:
- openstack/ansible-role-lunasa-hsm
ansible-role-thales-hsm:
repos:
- openstack/ansible-role-thales-hsm
barbican-specs:
release-management: none
repos:
- openstack/barbican-specs
barbican-tempest-plugin:
repos:
- openstack/barbican-tempest-plugin
barbican-ui:
repos:
- openstack/barbican-ui
python-barbicanclient:
repos:
- openstack/python-barbicanclient
blazar:
ptl:
name: Pierre Riteau
irc: priteau
email: pierre@stackhpc.com
irc-channel: openstack-blazar
service: Resource reservation service
mission: >
Blazar's goal is to provide resource reservations in
OpenStack clouds for different resource types, both
virtual (instances, volumes, etc.) and physical (hosts,
storage, etc.).
url: https://wiki.openstack.org/wiki/Blazar
deliverables:
blazar:
repos:
- openstack/blazar
blazar-dashboard:
repos:
- openstack/blazar-dashboard
blazar-nova:
repos:
- openstack/blazar-nova
blazar-specs:
release-management: none
repos:
- openstack/blazar-specs
blazar-tempest-plugin:
repos:
- openstack/blazar-tempest-plugin
python-blazarclient:
repos:
- openstack/python-blazarclient
cinder:
ptl:
name: Jon Bernard
irc: No nick supplied
email: jobernar@redhat.com
irc-channel: openstack-cinder
service: Block Storage service
mission: >
To implement services and libraries to provide on-demand, self-service
access to Block Storage resources via abstraction and automation on top of
other block storage devices.
url: https://wiki.openstack.org/wiki/Cinder
deliverables:
cinder:
repos:
- openstack/cinder
cinder-specs:
release-management: none
repos:
- openstack/cinder-specs
cinder-tempest-plugin:
repos:
- openstack/cinder-tempest-plugin
cinderlib:
deprecated: '2024.1'
release-management: deprecated
repos:
- openstack/cinderlib
os-brick:
repos:
- openstack/os-brick
python-brick-cinderclient-ext:
repos:
- openstack/python-brick-cinderclient-ext
python-cinderclient:
repos:
- openstack/python-cinderclient
rbd-iscsi-client:
repos:
- openstack/rbd-iscsi-client
cloudkitty:
ptl:
name: Rafael Weingartner
irc: No nick supplied
email: rafael@apache.org
appointed:
- victoria
- wallaby
irc-channel: cloudkitty
service: Rating service
mission: >
CloudKitty is a rating component for OpenStack. Its goal is to process data
from different metric backends and implement rating rule creation. Its role
is to fit in-between the raw metrics from OpenStack and the billing system
of a provider for chargeback purposes.
url: https://wiki.openstack.org/wiki/CloudKitty
deliverables:
cloudkitty:
repos:
- openstack/cloudkitty
python-cloudkittyclient:
repos:
- openstack/python-cloudkittyclient
cloudkitty-dashboard:
repos:
- openstack/cloudkitty-dashboard
cloudkitty-specs:
release-management: none
repos:
- openstack/cloudkitty-specs
cloudkitty-tempest-plugin:
repos:
- openstack/cloudkitty-tempest-plugin
cyborg:
ptl:
name: alex song
irc: songwenping
email: songwenping@inspur.com
appointed:
- ussuri
- xena
- '2024.1'
irc-channel: openstack-cyborg
service: Accelerator Life Cycle Management
mission: >
To provide a general management framework for accelerators (FPGA,GPU,SoC,
NVMe SSD,DPDK/SPDK,eBPF/XDP ...)
url: https://wiki.openstack.org/wiki/Cyborg
deliverables:
cyborg:
repos:
- openstack/cyborg
cyborg-specs:
release-management: none
repos:
- openstack/cyborg-specs
python-cyborgclient:
repos:
- openstack/python-cyborgclient
cyborg-tempest-plugin:
repos:
- openstack/cyborg-tempest-plugin
designate:
ptl:
name: Michael Johnson
irc: johnsom
email: johnsomor@gmail.com
appointed:
- ussuri
service: DNS service
irc-channel: openstack-dns
mission: >
To provide scalable, on demand, self service access to authoritative DNS
services, in a technology-agnostic manner.
url: https://wiki.openstack.org/wiki/Designate
deliverables:
designate:
repos:
- openstack/designate
designate-dashboard:
repos:
- openstack/designate-dashboard
designate-specs:
release-management: none
repos:
- openstack/designate-specs
designate-tempest-plugin:
repos:
- openstack/designate-tempest-plugin
python-designateclient:
repos:
- openstack/python-designateclient
freezer:
leadership_type: distributed
liaisons:
release:
- name: Dmitriy Rabotyagov
irc: noonedeadpunk
email: noonedeadpunk@gmail.com
tact-sig:
- name: Dmitriy Rabotyagov
irc: noonedeadpunk
email: noonedeadpunk@gmail.com
security:
- name: Alvaro Soto
irc: khyr0n
email: alsotoes@gmail.com
tc-liaison:
- name: Ghanshyam Mann
irc: gmann
email: gmann@ghanshyammann.com
- name: Dmitriy Rabotyagov
irc: noonedeadpunk
email: noonedeadpunk@gmail.com
appointed:
- stein
- zed
irc-channel: openstack-freezer
service: Backup, Restore, and Disaster Recovery service
mission: >
To provide integrated tools for backing up and restoring cloud data in
multiple use cases, including disaster recovery. These resources include
file systems, server instances, volumes, and databases.
url: https://wiki.openstack.org/wiki/Freezer
deliverables:
freezer:
repos:
- openstack/freezer
- openstack/freezer-api
freezer-specs:
release-management: none
repos:
- openstack/freezer-specs
freezer-tempest-plugin:
repos:
- openstack/freezer-tempest-plugin
freezer-web-ui:
repos:
- openstack/freezer-web-ui
python-freezerclient:
repos:
- openstack/python-freezerclient
glance:
ptl:
name: Pranali Deore
irc: pdeore
email: pdeore@redhat.com
irc-channel: openstack-glance
service: Image service
mission: >
To provide services and associated libraries to store, browse, share,
distribute and manage bootable disk images, other data closely associated
with initializing compute resources, and metadata definitions.
url: https://wiki.openstack.org/wiki/Glance
deliverables:
glance:
repos:
- openstack/glance
glance-specs:
release-management: none
repos:
- openstack/glance-specs
glance-tempest-plugin:
repos:
- openstack/glance-tempest-plugin
glance-store:
repos:
- openstack/glance_store
os-test-images:
repos:
- openstack/os-test-images
python-glanceclient:
repos:
- openstack/python-glanceclient
heat:
ptl:
name: Takashi Kajinami
irc: tkajinam
email: kajinamit@oss.nttdata.com
appointed:
- yoga
- zed
irc-channel: heat
service: Orchestration service
mission: >
To orchestrate composite cloud applications using a declarative
template format through an OpenStack-native REST API.
url: https://wiki.openstack.org/wiki/Heat
deliverables:
heat:
repos:
- openstack/heat
heat-agents:
repos:
- openstack/heat-agents
heat-cfntools:
repos:
- openstack/heat-cfntools
heat-dashboard:
repos:
- openstack/heat-dashboard
heat-specs:
release-management: none
repos:
- openstack/heat-specs
heat-tempest-plugin:
repos:
- openstack/heat-tempest-plugin
heat-templates:
release-management: none
repos:
- openstack/heat-templates
python-heatclient:
repos:
- openstack/python-heatclient
os-apply-config:
repos:
- openstack/os-apply-config
os-collect-config:
repos:
- openstack/os-collect-config
os-refresh-config:
repos:
- openstack/os-refresh-config
yaql:
repos:
- openstack/yaql
horizon:
ptl:
name: Tatiana Ovchinnikova
irc: tmazur
email: t.v.ovtchinnikova@gmail.com
irc-channel: openstack-horizon
service: Dashboard
mission: >
    To provide an extensible, unified, web-based user interface for all
    OpenStack services.
url: https://wiki.openstack.org/wiki/Horizon
deliverables:
horizon:
repos:
- openstack/horizon
ui-cookiecutter:
release-management: none
repos:
- openstack/ui-cookiecutter
xstatic-angular:
repos:
- openstack/xstatic-angular
xstatic-angular-bootstrap:
repos:
- openstack/xstatic-angular-bootstrap
xstatic-angular-fileupload:
repos:
- openstack/xstatic-angular-fileupload
xstatic-angular-gettext:
repos:
- openstack/xstatic-angular-gettext
xstatic-angular-lrdragndrop:
repos:
- openstack/xstatic-angular-lrdragndrop
xstatic-angular-material:
repos:
- openstack/xstatic-angular-material
xstatic-angular-notify:
repos:
- openstack/xstatic-angular-notify
xstatic-angular-smart-table:
repos:
- openstack/xstatic-angular-smart-table
xstatic-angular-uuid:
repos:
- openstack/xstatic-angular-uuid
xstatic-angular-vis:
repos:
- openstack/xstatic-angular-vis
xstatic-bootstrap-datepicker:
repos:
- openstack/xstatic-bootstrap-datepicker
xstatic-bootstrap-scss:
repos:
- openstack/xstatic-bootstrap-scss
xstatic-bootswatch:
repos:
- openstack/xstatic-bootswatch
xstatic-d3:
repos:
- openstack/xstatic-d3
xstatic-hogan:
repos:
- openstack/xstatic-hogan
xstatic-filesaver:
repos:
- openstack/xstatic-filesaver
xstatic-jasmine:
repos:
- openstack/xstatic-jasmine
xstatic-jquery-migrate:
repos:
- openstack/xstatic-jquery-migrate
xstatic-jquery.quicksearch:
repos:
- openstack/xstatic-jquery.quicksearch
xstatic-jquery.tablesorter:
repos:
- openstack/xstatic-jquery.tablesorter
xstatic-js-yaml:
repos:
- openstack/xstatic-js-yaml
xstatic-jsencrypt:
repos:
- openstack/xstatic-jsencrypt
xstatic-json2yaml:
repos:
- openstack/xstatic-json2yaml
xstatic-magic-search:
repos:
- openstack/xstatic-magic-search
xstatic-mdi:
repos:
- openstack/xstatic-mdi
xstatic-rickshaw:
repos:
- openstack/xstatic-rickshaw
xstatic-roboto-fontface:
repos:
- openstack/xstatic-roboto-fontface
xstatic-spin:
repos:
- openstack/xstatic-spin
ironic:
ptl:
name: Riccardo Pittau
irc: rpittau
email: elfosardo@gmail.com
irc-channel: openstack-ironic
service: Bare Metal service
mission: >
To produce an OpenStack service and associated libraries capable of
managing and provisioning physical machines, and to do this in a
security-aware and fault-tolerant manner.
url: https://wiki.openstack.org/wiki/Ironic
deliverables:
bifrost:
repos:
- openstack/bifrost
ironic:
repos:
- openstack/ironic
ironic-inspector:
repos:
- openstack/ironic-inspector
ironic-inspector-specs:
release-management: none
repos:
- openstack/ironic-inspector-specs
ironic-lib:
repos:
- openstack/ironic-lib
ironic-prometheus-exporter:
repos:
- openstack/ironic-prometheus-exporter
ironic-python-agent:
repos:
- openstack/ironic-python-agent
ironic-python-agent-builder:
repos:
- openstack/ironic-python-agent-builder
ironic-specs:
release-management: none
repos:
- openstack/ironic-specs
ironic-tempest-plugin:
repos:
- openstack/ironic-tempest-plugin
ironic-ui:
repos:
- openstack/ironic-ui
metalsmith:
repos:
- openstack/metalsmith
molteniron:
release-management: none
repos:
- openstack/molteniron
networking-baremetal:
repos:
- openstack/networking-baremetal
networking-generic-switch:
repos:
- openstack/networking-generic-switch
python-ironic-inspector-client:
repos:
- openstack/python-ironic-inspector-client
python-ironicclient:
repos:
- openstack/python-ironicclient
sushy:
repos:
- openstack/sushy
sushy-tools:
repos:
- openstack/sushy-tools
tenks:
repos:
- openstack/tenks
virtualbmc:
repos:
- openstack/virtualbmc
virtualpdu:
repos:
- openstack/virtualpdu
keystone:
ptl:
name: Dave Wilde
irc: d34dh0r53
email: dwilde@redhat.com
appointed:
- '2023.1'
irc-channel: openstack-keystone
service: Identity service
mission: >
To facilitate API client authentication, service discovery, distributed
multi-tenant authorization, and auditing.
url: https://wiki.openstack.org/wiki/Keystone
deliverables:
keystone:
repos:
- openstack/keystone
keystone-specs:
release-management: none
repos:
- openstack/keystone-specs
keystone-tempest-plugin:
repos:
- openstack/keystone-tempest-plugin
keystoneauth:
repos:
- openstack/keystoneauth
keystonemiddleware:
repos:
- openstack/keystonemiddleware
pycadf:
repos:
- openstack/pycadf
python-keystoneclient:
repos:
- openstack/python-keystoneclient
ldappool:
repos:
- openstack/ldappool
kolla:
ptl:
name: Michal Nasiadka
irc: mnasiadka
email: mnasiadka@gmail.com
irc-channel: openstack-kolla
service: Containerised deployment of OpenStack
mission: >
To provide production-ready containers and deployment tools for operating
OpenStack clouds.
url: https://wiki.openstack.org/wiki/Kolla
deliverables:
ansible-collection-kolla:
repos:
- openstack/ansible-collection-kolla
kolla:
repos:
- openstack/kolla
kolla-ansible:
repos:
- openstack/kolla-ansible
kayobe:
repos:
- openstack/kayobe
- openstack/kayobe-config
- openstack/kayobe-config-dev
magnum:
ptl:
name: Jake Yip
irc: jakeyip
email: jake.yip@ardc.edu.au
appointed:
- zed
irc-channel: openstack-containers
service: Container Infrastructure Management service
mission: >
To provide a set of services for provisioning, scaling, and managing
container orchestration engines.
url: https://wiki.openstack.org/wiki/Magnum
deliverables:
magnum:
repos:
- openstack/magnum
magnum-capi-helm:
repos:
- openstack/magnum-capi-helm
magnum-capi-helm-charts:
repos:
- openstack/magnum-capi-helm-charts
magnum-specs:
release-management: none
repos:
- openstack/magnum-specs
magnum-tempest-plugin:
repos:
- openstack/magnum-tempest-plugin
magnum-ui:
repos:
- openstack/magnum-ui
python-magnumclient:
repos:
- openstack/python-magnumclient
manila:
ptl:
name: Carlos Silva
irc: carloss
email: ces.eduardo98@gmail.com
irc-channel: openstack-manila
service: Shared File Systems service
mission: >
To provide a set of services for management of shared file systems
in a multitenant cloud environment, similar to how OpenStack provides
for block-based storage management through the Cinder project.
url: https://wiki.openstack.org/wiki/Manila
deliverables:
manila:
repos:
- openstack/manila
manila-image-elements:
repos:
- openstack/manila-image-elements
manila-specs:
release-management: none
repos:
- openstack/manila-specs
manila-tempest-plugin:
repos:
- openstack/manila-tempest-plugin
manila-test-image:
release-management: none
repos:
- openstack/manila-test-image
manila-ui:
repos:
- openstack/manila-ui
python-manilaclient:
repos:
- openstack/python-manilaclient
masakari:
ptl:
name: sam sue
irc: No nick supplied
email: suzhengwei@inspur.com
appointed:
- yoga
- zed
- '2023.1'
irc-channel: openstack-masakari
service: Instances High Availability service
mission: >
    Provide a high availability service for instances in OpenStack
    clouds by automatically recovering instances from failures.
url: https://wiki.openstack.org/wiki/Masakari
deliverables:
masakari:
repos:
- openstack/masakari
masakari-monitors:
repos:
- openstack/masakari-monitors
masakari-specs:
release-management: none
repos:
- openstack/masakari-specs
python-masakariclient:
repos:
- openstack/python-masakariclient
masakari-dashboard:
repos:
- openstack/masakari-dashboard
mistral:
ptl:
name: Axel Vanzaghi
irc: avanzaghi
email: avanzaghi.osf@axellink.fr
appointed:
- '2024.1'
- '2025.1'
irc-channel: openstack-mistral
service: Workflow service
mission: >
    Provide a simple YAML-based language for writing workflows (tasks and
    transition rules) and a service that allows users to upload, modify, and
    run them at scale and in a highly available manner, and to manage and
    monitor workflow execution state and the state of individual tasks.
url: https://wiki.openstack.org/wiki/Mistral
deliverables:
mistral:
repos:
- openstack/mistral
mistral-dashboard:
repos:
- openstack/mistral-dashboard
mistral-specs:
release-management: none
repos:
- openstack/mistral-specs
mistral-tempest-plugin:
repos:
- openstack/mistral-tempest-plugin
python-mistralclient:
repos:
- openstack/python-mistralclient
mistral-lib:
repos:
- openstack/mistral-lib
mistral-extra:
repos:
- openstack/mistral-extra
monasca:
ptl:
name: Hasan Acar
irc: No nick supplied
email: hasan.acar@tubitak.gov.tr
appointed:
- xena
- yoga
- zed
- '2023.2'
irc-channel: openstack-monasca
service: Monitoring
mission: >
To provide a multi-tenant, highly scalable, performant, fault-tolerant
monitoring-as-a-service solution for metrics, complex event processing
and logging. To build an extensible platform for advanced monitoring
services that can be used by both operators and tenants to gain
operational insight and visibility, ensuring availability and stability.
url: https://wiki.openstack.org/wiki/Monasca
deliverables:
monasca-api:
repos:
- openstack/monasca-api
monasca-log-api:
deprecated: ussuri
release-management: deprecated
repos:
- openstack/monasca-log-api
monasca-events-api:
repos:
- openstack/monasca-events-api
monasca-specs:
release-management: none
repos:
- openstack/monasca-specs
monasca-notification:
repos:
- openstack/monasca-notification
monasca-persister:
repos:
- openstack/monasca-persister
monasca-tempest-plugin:
repos:
- openstack/monasca-tempest-plugin
monasca-thresh:
repos:
- openstack/monasca-thresh
monasca-common:
repos:
- openstack/monasca-common
monasca-ui:
repos:
- openstack/monasca-ui
python-monascaclient:
repos:
- openstack/python-monascaclient
monasca-agent:
repos:
- openstack/monasca-agent
monasca-statsd:
repos:
- openstack/monasca-statsd
monasca-ceilometer:
deprecated: ussuri
release-management: deprecated
repos:
- openstack/monasca-ceilometer
monasca-transform:
deprecated: victoria
release-management: deprecated
repos:
- openstack/monasca-transform
monasca-grafana-datasource:
repos:
- openstack/monasca-grafana-datasource
monasca-kibana-plugin:
repos:
- openstack/monasca-kibana-plugin
neutron:
ptl:
name: Brian Haley
irc: haleyb
email: haleyb.dev@gmail.com
irc-channel: openstack-neutron
service: Networking service
mission: >
To implement services and associated libraries to provide on-demand,
scalable, and technology-agnostic network abstraction.
url: https://wiki.openstack.org/wiki/Neutron
deliverables:
networking-bagpipe:
repos:
- openstack/networking-bagpipe
networking-bgpvpn:
repos:
- openstack/networking-bgpvpn
networking-midonet:
deprecated: wallaby
release-management: deprecated
repos:
- openstack/networking-midonet
networking-odl:
deprecated: '2023.2'
release-management: deprecated
repos:
- openstack/networking-odl
networking-sfc:
repos:
- openstack/networking-sfc
neutron-fwaas:
repos:
- openstack/neutron-fwaas
neutron:
repos:
- openstack/neutron
neutron-dynamic-routing:
repos:
- openstack/neutron-dynamic-routing
neutron-lib:
repos:
- openstack/neutron-lib
neutron-specs:
release-management: none
repos:
- openstack/neutron-specs
neutron-tempest-plugin:
repos:
- openstack/neutron-tempest-plugin
ovn-octavia-provider:
repos:
- openstack/ovn-octavia-provider
ovsdbapp:
repos:
- openstack/ovsdbapp
python-neutronclient:
repos:
- openstack/python-neutronclient
neutron-fwaas-dashboard:
repos:
- openstack/neutron-fwaas-dashboard
neutron-vpnaas:
repos:
- openstack/neutron-vpnaas
neutron-vpnaas-dashboard:
repos:
- openstack/neutron-vpnaas-dashboard
os-ken:
repos:
- openstack/os-ken
tap-as-a-service:
repos:
- openstack/tap-as-a-service
ovn-bgp-agent:
repos:
- openstack/ovn-bgp-agent
nova:
ptl:
name: Sylvain Bauza
irc: bauzas
email: sbauza@redhat.com
irc-channel: openstack-nova
service: Compute service
mission: >
To implement services and associated libraries to provide massively
scalable, on demand, self service access to compute resources, including
bare metal, virtual machines, and containers.
url: https://wiki.openstack.org/wiki/Nova
deliverables:
nova:
repos:
- openstack/nova
nova-specs:
release-management: none
repos:
- openstack/nova-specs
python-novaclient:
repos:
- openstack/python-novaclient
os-vif:
repos:
- openstack/os-vif
placement:
repos:
- openstack/placement
os-traits:
repos:
- openstack/os-traits
osc-placement:
repos:
- openstack/osc-placement
os-resource-classes:
repos:
- openstack/os-resource-classes
octavia:
ptl:
name: Gregory Thiemonge
irc: gthiemonge
email: gthiemon@redhat.com
appointed:
- wallaby
irc-channel: openstack-lbaas
service: Load-balancer service
mission: >
To provide scalable, on demand, self service access to load-balancer
services, in a technology-agnostic manner.
url: https://wiki.openstack.org/wiki/Octavia
deliverables:
octavia:
repos:
- openstack/octavia
octavia-dashboard:
repos:
- openstack/octavia-dashboard
octavia-tempest-plugin:
repos:
- openstack/octavia-tempest-plugin
python-octaviaclient:
repos:
- openstack/python-octaviaclient
octavia-lib:
repos:
- openstack/octavia-lib
OpenStack Charms:
ptl:
name: James Page
irc: jamespage
email: james.page@canonical.com
appointed:
- wallaby
- zed
- '2023.1'
- '2023.2'
- '2024.2'
irc-channel: openstack-charms
service: Juju Charms for deployment of OpenStack
mission: >
Develop and maintain Juju Charms for deploying and managing
OpenStack services.
url: https://docs.openstack.org/charm-guide/latest/
deliverables:
charms.ceph:
release-management: external
repos:
- openstack/charms.ceph
charms.openstack:
release-management: external
repos:
- openstack/charms.openstack
charm-aodh:
release-management: external
repos:
- openstack/charm-aodh
charm-barbican:
release-management: external
repos:
- openstack/charm-barbican
charm-barbican-softhsm:
release-management: external
repos:
- openstack/charm-barbican-softhsm
charm-barbican-vault:
release-management: external
repos:
- openstack/charm-barbican-vault
charm-ceilometer:
release-management: external
repos:
- openstack/charm-ceilometer
charm-ceilometer-agent:
release-management: external
repos:
- openstack/charm-ceilometer-agent
charm-ceph-dashboard:
release-management: external
repos:
- openstack/charm-ceph-dashboard
charm-ceph-iscsi:
release-management: external
repos:
- openstack/charm-ceph-iscsi
charm-ceph-mon:
release-management: external
repos:
- openstack/charm-ceph-mon
charm-ceph-nfs:
release-management: external
repos:
- openstack/charm-ceph-nfs
charm-ceph-osd:
release-management: external
repos:
- openstack/charm-ceph-osd
charm-ceph-fs:
release-management: external
repos:
- openstack/charm-ceph-fs
charm-ceph-radosgw:
release-management: external
repos:
- openstack/charm-ceph-radosgw
charm-ceph-rbd-mirror:
release-management: external
repos:
- openstack/charm-ceph-rbd-mirror
charm-ceph-proxy:
release-management: external
repos:
- openstack/charm-ceph-proxy
charm-cinder:
release-management: external
repos:
- openstack/charm-cinder
charm-cinder-backup:
release-management: external
repos:
- openstack/charm-cinder-backup
charm-cinder-backup-swift-proxy:
release-management: external
repos:
- openstack/charm-cinder-backup-swift-proxy
charm-cinder-ceph:
release-management: external
repos:
- openstack/charm-cinder-ceph
charm-cinder-lvm:
release-management: external
repos:
- openstack/charm-cinder-lvm
charm-cinder-netapp:
release-management: external
repos:
- openstack/charm-cinder-netapp
charm-cinder-nfs:
release-management: external
repos:
- openstack/charm-cinder-nfs
charm-cinder-nimblestorage:
release-management: external
repos:
- openstack/charm-cinder-nimblestorage
charm-cinder-purestorage:
release-management: external
repos:
- openstack/charm-cinder-purestorage
charm-cinder-solidfire:
release-management: external
repos:
- openstack/charm-cinder-solidfire
charm-cinder-three-par:
release-management: external
repos:
- openstack/charm-cinder-three-par
charm-cinder-dell-emc-powerstore:
release-management: external
repos:
- openstack/charm-cinder-dell-emc-powerstore
charm-cinder-ibm-storwize-svc:
release-management: external
repos:
- openstack/charm-cinder-ibm-storwize-svc
charm-cinder-infinidat:
release-management: external
repos:
- openstack/charm-cinder-infinidat
charm-cloudkitty:
release-management: external
repos:
- openstack/charm-cloudkitty
charm-deployment-guide:
release-management: external
repos:
- openstack/charm-deployment-guide
charm-designate:
release-management: external
repos:
- openstack/charm-designate
charm-designate-bind:
release-management: external
repos:
- openstack/charm-designate-bind
charm-gnocchi:
release-management: external
repos:
- openstack/charm-gnocchi
charm-placement:
release-management: external
repos:
- openstack/charm-placement
charm-glance:
release-management: external
repos:
- openstack/charm-glance
charm-glance-simplestreams-sync:
release-management: external
repos:
- openstack/charm-glance-simplestreams-sync
charm-guide:
release-management: none
repos:
- openstack/charm-guide
charm-hacluster:
release-management: external
repos:
- openstack/charm-hacluster
charm-heat:
release-management: external
repos:
- openstack/charm-heat
charm-infinidat-tools:
release-management: external
repos:
- openstack/charm-infinidat-tools
charm-interface-barbican-secrets:
release-management: external
repos:
- openstack/charm-interface-barbican-secrets
charm-interface-bgp:
release-management: external
repos:
- openstack/charm-interface-bgp
charm-interface-bind-rndc:
release-management: external
repos:
- openstack/charm-interface-bind-rndc
charm-interface-ceph-client:
release-management: external
repos:
- openstack/charm-interface-ceph-client
charm-interface-ceph-mds:
release-management: external
repos:
- openstack/charm-interface-ceph-mds
charm-interface-ceph-rbd-mirror:
release-management: external
repos:
- openstack/charm-interface-ceph-rbd-mirror
charm-interface-cinder-backend:
release-management: external
repos:
- openstack/charm-interface-cinder-backend
charm-interface-cinder-backup:
release-management: external
repos:
- openstack/charm-interface-cinder-backup
charm-interface-dashboard-plugin:
release-management: external
repos:
- openstack/charm-interface-dashboard-plugin
charm-interface-designate:
release-management: external
repos:
- openstack/charm-interface-designate
charm-interface-gnocchi:
release-management: external
repos:
- openstack/charm-interface-gnocchi
charm-interface-hacluster:
release-management: external
repos:
- openstack/charm-interface-hacluster
charm-interface-ironic-api:
release-management: external
repos:
- openstack/charm-interface-ironic-api
charm-interface-keystone:
release-management: external
repos:
- openstack/charm-interface-keystone
charm-interface-keystone-admin:
release-management: external
repos:
- openstack/charm-interface-keystone-admin
charm-interface-keystone-credentials:
release-management: external
repos:
- openstack/charm-interface-keystone-credentials
charm-interface-keystone-domain-backend:
release-management: external
repos:
- openstack/charm-interface-keystone-domain-backend
charm-interface-keystone-fid-service-provider:
release-management: external
repos:
- openstack/charm-interface-keystone-fid-service-provider
charm-interface-keystone-notifications:
release-management: external
repos:
- openstack/charm-interface-keystone-notifications
charm-interface-magpie:
release-management: external
repos:
- openstack/charm-interface-magpie
charm-interface-manila-plugin:
release-management: external
repos:
- openstack/charm-interface-manila-plugin
charm-interface-mysql-innodb-cluster:
release-management: external
repos:
- openstack/charm-interface-mysql-innodb-cluster
charm-interface-mysql-router:
release-management: external
repos:
- openstack/charm-interface-mysql-router
charm-interface-mysql-shared:
release-management: external
repos:
- openstack/charm-interface-mysql-shared
charm-interface-neutron-load-balancer:
release-management: external
repos:
- openstack/charm-interface-neutron-load-balancer
charm-interface-nova-cell:
release-management: external
repos:
- openstack/charm-interface-nova-cell
charm-interface-nova-compute:
release-management: external
repos:
- openstack/charm-interface-nova-compute
charm-interface-neutron-plugin:
release-management: external
repos:
- openstack/charm-interface-neutron-plugin
charm-interface-neutron-plugin-api-subordinate:
release-management: external
repos:
- openstack/charm-interface-neutron-plugin-api-subordinate
charm-interface-odl-controller-api:
release-management: external
repos:
- openstack/charm-interface-odl-controller-api
charm-interface-openstack-ha:
release-management: external
repos:
- openstack/charm-interface-openstack-ha
charm-interface-ovsdb-manager:
release-management: external
repos:
- openstack/charm-interface-ovsdb-manager
charm-interface-pacemaker-remote:
release-management: external
repos:
- openstack/charm-interface-pacemaker-remote
charm-interface-placement:
release-management: external
repos:
- openstack/charm-interface-placement
charm-interface-rabbitmq:
release-management: external
repos:
- openstack/charm-interface-rabbitmq
charm-interface-service-control:
release-management: external
repos:
- openstack/charm-interface-service-control
charm-interface-websso-fid-service-provider:
release-management: external
repos:
- openstack/charm-interface-websso-fid-service-provider
charm-ironic:
release-management: external
repos:
- openstack/charm-ironic
charm-ironic-api:
release-management: external
repos:
- openstack/charm-ironic-api
charm-ironic-conductor:
release-management: external
repos:
- openstack/charm-ironic-conductor
charm-ironic-dashboard:
release-management: external
repos:
- openstack/charm-ironic-dashboard
charm-keystone:
release-management: external
repos:
- openstack/charm-keystone
charm-keystone-kerberos:
release-management: external
repos:
- openstack/charm-keystone-kerberos
charm-keystone-ldap:
release-management: external
repos:
- openstack/charm-keystone-ldap
charm-keystone-openidc:
release-management: external
repos:
- openstack/charm-keystone-openidc
charm-keystone-saml-mellon:
release-management: external
repos:
- openstack/charm-keystone-saml-mellon
charm-layer-ceph:
release-management: external
repos:
- openstack/charm-layer-ceph
charm-layer-ceph-base:
release-management: external
repos:
- openstack/charm-layer-ceph-base
charm-layer-openstack:
release-management: external
repos:
- openstack/charm-layer-openstack
charm-layer-openstack-api:
release-management: external
repos:
- openstack/charm-layer-openstack-api
charm-layer-openstack-principle:
release-management: external
repos:
- openstack/charm-layer-openstack-principle
charm-magnum:
release-management: external
repos:
- openstack/charm-magnum
charm-magnum-dashboard:
release-management: external
repos:
- openstack/charm-magnum-dashboard
charm-magpie:
release-management: external
repos:
- openstack/charm-magpie
charm-manila:
release-management: external
repos:
- openstack/charm-manila
charm-manila-dashboard:
release-management: external
repos:
- openstack/charm-manila-dashboard
charm-manila-flashblade:
release-management: external
repos:
- openstack/charm-manila-flashblade
charm-manila-ganesha:
release-management: external
repos:
- openstack/charm-manila-ganesha
charm-manila-generic:
release-management: external
repos:
- openstack/charm-manila-generic
charm-manila-netapp:
release-management: external
repos:
- openstack/charm-manila-netapp
charm-manila-infinidat:
release-management: external
repos:
- openstack/charm-manila-infinidat
charm-masakari:
release-management: external
repos:
- openstack/charm-masakari
charm-masakari-monitors:
release-management: external
repos:
- openstack/charm-masakari-monitors
charm-mysql-innodb-cluster:
release-management: external
repos:
- openstack/charm-mysql-innodb-cluster
charm-mysql-router:
release-management: external
repos:
- openstack/charm-mysql-router
charm-neutron-api:
release-management: external
repos:
- openstack/charm-neutron-api
charm-neutron-api-plugin-arista:
release-management: external
repos:
- openstack/charm-neutron-api-plugin-arista
charm-neutron-api-plugin-ironic:
release-management: external
repos:
- openstack/charm-neutron-api-plugin-ironic
charm-neutron-api-plugin-ovn:
release-management: external
repos:
- openstack/charm-neutron-api-plugin-ovn
charm-neutron-dynamic-routing:
release-management: external
repos:
- openstack/charm-neutron-dynamic-routing
charm-neutron-gateway:
release-management: external
repos:
- openstack/charm-neutron-gateway
charm-neutron-openvswitch:
release-management: external
repos:
- openstack/charm-neutron-openvswitch
charm-nova-cell-controller:
release-management: external
repos:
- openstack/charm-nova-cell-controller
charm-nova-cloud-controller:
release-management: external
repos:
- openstack/charm-nova-cloud-controller
charm-nova-compute:
release-management: external
repos:
- openstack/charm-nova-compute
charm-nova-compute-nvidia-vgpu:
release-management: external
repos:
- openstack/charm-nova-compute-nvidia-vgpu
charm-nova-compute-proxy:
release-management: external
repos:
- openstack/charm-nova-compute-proxy
charm-octavia:
release-management: external
repos:
- openstack/charm-octavia
charm-octavia-dashboard:
release-management: external
repos:
- openstack/charm-octavia-dashboard
charm-octavia-diskimage-retrofit:
release-management: external
repos:
- openstack/charm-octavia-diskimage-retrofit
charm-openstack-dashboard:
release-management: external
repos:
- openstack/charm-openstack-dashboard
charm-openstack-loadbalancer:
release-management: external
repos:
- openstack/charm-openstack-loadbalancer
charm-ops-interface-ceph-client:
release-management: external
repos:
- openstack/charm-ops-interface-ceph-client
charm-ops-interface-ceph-iscsi-admin-access:
release-management: external
repos:
- openstack/charm-ops-interface-ceph-iscsi-admin-access
charm-ops-interface-openstack-loadbalancer:
release-management: external
repos:
- openstack/charm-ops-interface-openstack-loadbalancer
charm-ops-interface-tls-certificates:
release-management: external
repos:
- openstack/charm-ops-interface-tls-certificates
charm-ops-openstack:
release-management: external
repos:
- openstack/charm-ops-openstack
charm-pacemaker-remote:
release-management: external
repos:
- openstack/charm-pacemaker-remote
charm-percona-cluster:
release-management: external
repos:
- openstack/charm-percona-cluster
charm-rabbitmq-server:
release-management: external
repos:
- openstack/charm-rabbitmq-server
charm-specs:
release-management: none
repos:
- openstack/charm-specs
charm-swift-proxy:
release-management: external
repos:
- openstack/charm-swift-proxy
charm-swift-storage:
release-management: external
repos:
- openstack/charm-swift-storage
charm-tempest:
release-management: external
repos:
- openstack/charm-tempest
charm-trilio-data-mover:
release-management: external
repos:
- openstack/charm-trilio-data-mover
charm-trilio-dm-api:
release-management: external
repos:
- openstack/charm-trilio-dm-api
charm-trilio-horizon-plugin:
release-management: external
repos:
- openstack/charm-trilio-horizon-plugin
charm-trilio-wlm:
release-management: external
repos:
- openstack/charm-trilio-wlm
charm-vault:
release-management: external
repos:
- openstack/charm-vault
charm-watcher:
release-management: external
repos:
- openstack/charm-watcher
charm-watcher-dashboard:
release-management: external
repos:
- openstack/charm-watcher-dashboard
charm-zuul-jobs:
release-management: external
repos:
- openstack/charm-zuul-jobs
OpenStack-Helm:
ptl:
name: Vladimir Kozhukalov
irc: kozhukalov
email: kozhukalov@gmail.com
service: Helm charts for OpenStack services
irc-channel: openstack-helm
mission: >
To provide a collection of Helm charts that simply, resiliently,
and flexibly deploy OpenStack and related services on Kubernetes.
OpenStack-Helm also produces lightweight OCI container images agnostic of
the deployment tooling for OpenStack.
url: https://wiki.openstack.org/wiki/Openstack-helm
deliverables:
openstack-helm:
release-management: external
repos:
- openstack/openstack-helm
openstack-helm-images:
release-management: external
repos:
- openstack/openstack-helm-images
openstack-helm-infra:
release-management: external
repos:
- openstack/openstack-helm-infra
openstack-helm-plugin:
release-management: external
repos:
- openstack/openstack-helm-plugin
loci:
release-management: none
repos:
- openstack/loci
OpenStackAnsible:
ptl:
name: Dmitriy Rabotyagov
irc: noonedeadpunk
email: noonedeadpunk@gmail.com
service: Ansible playbooks and roles for deployment
irc-channel: openstack-ansible
mission: >
Deploying OpenStack from source in a way that makes it scalable
while also being simple to operate, upgrade, and grow.
url: https://wiki.openstack.org/wiki/OpenStackAnsible
deliverables:
ansible-config_template:
repos:
- openstack/ansible-config_template
openstack-ansible:
repos:
- openstack/openstack-ansible
openstack-ansible-roles:
repos:
- openstack/ansible-role-frrouting
- openstack/ansible-role-httpd
- openstack/ansible-role-qdrouterd
- openstack/ansible-role-pki
- openstack/ansible-role-proxysql
- openstack/ansible-role-python_venv_build
- openstack/ansible-role-systemd_mount
- openstack/ansible-role-systemd_networkd
- openstack/ansible-role-systemd_service
- openstack/ansible-role-uwsgi
- openstack/ansible-role-vault
- openstack/ansible-role-zookeeper
- openstack/ansible-hardening
- openstack/openstack-ansible-apt_package_pinning
- openstack/openstack-ansible-ceph_client
- openstack/openstack-ansible-galera_server
- openstack/openstack-ansible-haproxy_server
- openstack/openstack-ansible-lxc_container_create
- openstack/openstack-ansible-lxc_hosts
- openstack/openstack-ansible-memcached_server
- openstack/openstack-ansible-openstack_hosts
- openstack/openstack-ansible-openstack_openrc
- openstack/openstack-ansible-ops
- openstack/openstack-ansible-os_adjutant
- openstack/openstack-ansible-os_aodh
- openstack/openstack-ansible-os_barbican
- openstack/openstack-ansible-os_blazar
- openstack/openstack-ansible-os_ceilometer
- openstack/openstack-ansible-os_cinder
- openstack/openstack-ansible-os_cloudkitty
- openstack/openstack-ansible-os_designate
- openstack/openstack-ansible-os_glance
- openstack/openstack-ansible-os_gnocchi
- openstack/openstack-ansible-os_heat
- openstack/openstack-ansible-os_horizon
- openstack/openstack-ansible-os_ironic
- openstack/openstack-ansible-os_keystone
- openstack/openstack-ansible-os_magnum
- openstack/openstack-ansible-os_manila
- openstack/openstack-ansible-os_masakari
- openstack/openstack-ansible-os_mistral
- openstack/openstack-ansible-os_monasca
- openstack/openstack-ansible-os_monasca-agent
- openstack/openstack-ansible-os_neutron
- openstack/openstack-ansible-os_nova
- openstack/openstack-ansible-os_octavia
- openstack/openstack-ansible-os_placement
- openstack/openstack-ansible-os_rally
- openstack/openstack-ansible-os_skyline
- openstack/openstack-ansible-os_swift
- openstack/openstack-ansible-os_tacker
- openstack/openstack-ansible-os_tempest
- openstack/openstack-ansible-os_trove
- openstack/openstack-ansible-os_zun
- openstack/openstack-ansible-plugins
- openstack/openstack-ansible-rabbitmq_server
- openstack/openstack-ansible-repo_server
- openstack/openstack-ansible-tests
openstack-ansible-nspawn_containers:
deprecated: wallaby
release-management: deprecated
repos:
- openstack/openstack-ansible-nspawn_container_create
- openstack/openstack-ansible-nspawn_hosts
openstack-ansible-galera_client:
deprecated: victoria
release-management: deprecated
repos:
- openstack/openstack-ansible-galera_client
openstack-ansible-os_congress:
deprecated: victoria
release-management: deprecated
repos:
- openstack/openstack-ansible-os_congress
openstack-ansible-os_karbor:
release-management: deprecated
repos:
- openstack/openstack-ansible-os_karbor
openstack-ansible-os_panko:
deprecated: xena
release-management: deprecated
repos:
- openstack/openstack-ansible-os_panko
openstack-ansible-rsyslog:
deprecated: zed
release-management: deprecated
repos:
- openstack/openstack-ansible-rsyslog_server
- openstack/openstack-ansible-rsyslog_client
openstack-ansible-specs:
release-management: none
repos:
- openstack/openstack-ansible-specs
OpenStackSDK:
ptl:
name: Artem Goncharov
irc: gtema
email: artem.goncharov@gmail.com
appointed:
- ussuri
irc-channel: openstack-sdks
service: Multi-cloud Python SDK and CLI for End Users
mission: >
To provide a unified multi-cloud aware SDK and CLI for the OpenStack
REST API, exposing both the full set of low-level APIs and curated
higher-level business logic, and to ensure that end users have the necessary
tools to work with OpenStack-based clouds.
url: https://docs.openstack.org/openstacksdk/latest/
deliverables:
cliff:
repos:
- openstack/cliff
codegenerator:
repos:
- openstack/codegenerator
openapi:
repos:
- openstack/openapi
openstackclient:
repos:
- openstack/openstackclient
openstacksdk:
repos:
- openstack/openstacksdk
os-client-config:
repos:
- openstack/os-client-config
os-service-types:
repos:
- openstack/os-service-types
osc-lib:
repos:
- openstack/osc-lib
python-openstackclient:
repos:
- openstack/python-openstackclient
requestsexceptions:
repos:
- openstack/requestsexceptions
shade:
repos:
- openstack/shade
oslo:
ptl:
name: APPOINTMENT NEEDED
irc: No nick supplied
email: example@example.org
irc-channel: openstack-oslo
service: Common libraries
mission: >
To produce a set of python libraries containing code shared by OpenStack
projects. The APIs provided by these libraries should be high quality,
stable, consistent, documented and generally applicable.
url: https://wiki.openstack.org/wiki/Oslo
deliverables:
automaton:
repos:
- openstack/automaton
castellan:
repos:
- openstack/castellan
cookiecutter:
release-management: none
repos:
- openstack/cookiecutter
debtcollector:
repos:
- openstack/debtcollector
devstack-plugin-amqp1:
release-management: none
repos:
- openstack/devstack-plugin-amqp1
devstack-plugin-kafka:
release-management: none
repos:
- openstack/devstack-plugin-kafka
etcd3gw:
repos:
- openstack/etcd3gw
futurist:
repos:
- openstack/futurist
microversion-parse:
repos:
- openstack/microversion-parse
openstack-doc-tools:
repos:
- openstack/openstack-doc-tools
openstackdocstheme:
repos:
- openstack/openstackdocstheme
os-api-ref:
repos:
- openstack/os-api-ref
oslo-cookiecutter:
release-management: none
repos:
- openstack/oslo-cookiecutter
oslo-specs:
release-management: none
repos:
- openstack/oslo-specs
oslo.cache:
repos:
- openstack/oslo.cache
oslo.concurrency:
repos:
- openstack/oslo.concurrency
oslo.config:
repos:
- openstack/oslo.config
oslo.context:
repos:
- openstack/oslo.context
oslo.db:
repos:
- openstack/oslo.db
oslo.i18n:
repos:
- openstack/oslo.i18n
oslo.limit:
repos:
- openstack/oslo.limit
oslo.log:
repos:
- openstack/oslo.log
oslo.messaging:
repos:
- openstack/oslo.messaging
oslo.metrics:
repos:
- openstack/oslo.metrics
oslo.middleware:
repos:
- openstack/oslo.middleware
oslo.policy:
repos:
- openstack/oslo.policy
oslo.privsep:
repos:
- openstack/oslo.privsep
oslo.reports:
repos:
- openstack/oslo.reports
oslo.rootwrap:
repos:
- openstack/oslo.rootwrap
oslo.serialization:
repos:
- openstack/oslo.serialization
oslo.service:
repos:
- openstack/oslo.service
oslo.tools:
release-management: none
repos:
- openstack/oslo.tools
oslo.upgradecheck:
repos:
- openstack/oslo.upgradecheck
oslo.utils:
repos:
- openstack/oslo.utils
oslo.versionedobjects:
repos:
- openstack/oslo.versionedobjects
oslo.vmware:
repos:
- openstack/oslo.vmware
oslotest:
repos:
- openstack/oslotest
osprofiler:
repos:
- openstack/osprofiler
pbr:
repos:
- openstack/pbr
sphinx-feature-classification:
repos:
- openstack/sphinx-feature-classification
stevedore:
repos:
- openstack/stevedore
taskflow:
repos:
- openstack/taskflow
tooz:
repos:
- openstack/tooz
whereto:
repos:
- openstack/whereto
Puppet OpenStack:
ptl:
name: Takashi Kajinami
irc: tkajinam
email: kajinamit@oss.nttdata.com
appointed:
- yoga
irc-channel: puppet-openstack
service: Puppet modules for deployment
mission: >
The Puppet modules for OpenStack bring scalable and reliable IT automation
to OpenStack cloud deployments.
url: https://docs.openstack.org/puppet-openstack-guide/latest/
deliverables:
puppet-aodh:
repos:
- openstack/puppet-aodh
puppet-barbican:
repos:
- openstack/puppet-barbican
puppet-ceilometer:
repos:
- openstack/puppet-ceilometer
puppet-ceph:
repos:
- openstack/puppet-ceph
puppet-cinder:
repos:
- openstack/puppet-cinder
puppet-cloudkitty:
repos:
- openstack/puppet-cloudkitty
puppet-designate:
repos:
- openstack/puppet-designate
puppet-glance:
repos:
- openstack/puppet-glance
puppet-gnocchi:
repos:
- openstack/puppet-gnocchi
puppet-heat:
repos:
- openstack/puppet-heat
puppet-horizon:
repos:
- openstack/puppet-horizon
puppet-ironic:
repos:
- openstack/puppet-ironic
puppet-keystone:
repos:
- openstack/puppet-keystone
puppet-magnum:
repos:
- openstack/puppet-magnum
puppet-manila:
repos:
- openstack/puppet-manila
puppet-mistral:
repos:
- openstack/puppet-mistral
puppet-neutron:
repos:
- openstack/puppet-neutron
puppet-nova:
repos:
- openstack/puppet-nova
puppet-octavia:
repos:
- openstack/puppet-octavia
puppet-openstack-cookiecutter:
release-management: none
repos:
- openstack/puppet-openstack-cookiecutter
puppet-openstack-guide:
release-management: none
repos:
- openstack/puppet-openstack-guide
puppet-openstack-integration:
repos:
- openstack/puppet-openstack-integration
puppet-openstack_extras:
repos:
- openstack/puppet-openstack_extras
puppet-openstack_spec_helper:
repos:
- openstack/puppet-openstack_spec_helper
puppet-openstacklib:
repos:
- openstack/puppet-openstacklib
puppet-oslo:
repos:
- openstack/puppet-oslo
puppet-ovn:
repos:
- openstack/puppet-ovn
puppet-placement:
repos:
- openstack/puppet-placement
puppet-swift:
repos:
- openstack/puppet-swift
puppet-tempest:
repos:
- openstack/puppet-tempest
puppet-trove:
repos:
- openstack/puppet-trove
puppet-vitrage:
repos:
- openstack/puppet-vitrage
puppet-vswitch:
repos:
- openstack/puppet-vswitch
puppet-watcher:
repos:
- openstack/puppet-watcher
puppet-zaqar:
repos:
- openstack/puppet-zaqar
Quality Assurance:
ptl:
name: Martin Kopec
irc: kopecmartin
email: mkopec@redhat.com
irc-channel: openstack-qa
mission: >
Develop, maintain, and initiate tools and plans to ensure the upstream
stability and quality of OpenStack, and its release readiness at any point
during the release cycle.
url: https://wiki.openstack.org/wiki/QA
deliverables:
bashate:
repos:
- openstack/bashate
coverage2sql:
release-management: none
repos:
- openstack/coverage2sql
devstack:
repos:
- openstack/devstack
devstack-plugin-ceph:
repos:
- openstack/devstack-plugin-ceph
devstack-plugin-cookiecutter:
release-management: none
repos:
- openstack/devstack-plugin-cookiecutter
devstack-plugin-open-cas:
release-management: none
repos:
- openstack/devstack-plugin-open-cas
devstack-plugin-prometheus:
release-management: none
repos:
- openstack/devstack-plugin-prometheus
devstack-tools:
repos:
- openstack/devstack-tools
devstack-vagrant:
release-management: none
repos:
- openstack/devstack-vagrant
eslint-config-openstack:
repos:
- openstack/eslint-config-openstack
grenade:
repos:
- openstack/grenade
hacking:
repos:
- openstack/hacking
karma-subunit-reporter:
repos:
- openstack/karma-subunit-reporter
os-performance-tools:
release-management: none
repos:
- openstack/os-performance-tools
os-testr:
repos:
- openstack/os-testr
qa-specs:
release-management: none
repos:
- openstack/qa-specs
stackviz:
release-management: none
repos:
- openstack/stackviz
tempest:
repos:
- openstack/tempest
tempest-plugin-cookiecutter:
release-management: none
repos:
- openstack/tempest-plugin-cookiecutter
tempest-stress:
release-management: none
repos:
- openstack/tempest-stress
devstack-plugin-container:
repos:
- openstack/devstack-plugin-container
devstack-plugin-nfs:
release-management: none
repos:
- openstack/devstack-plugin-nfs
whitebox-tempest-plugin:
repos:
- openstack/whitebox-tempest-plugin
rally:
ptl:
name: Andrey Kurilin
irc: andreykurilin
email: andr.kurilin@gmail.com
appointed:
- victoria
- '2023.2'
- '2024.1'
irc-channel: openstack-rally
service: Benchmark service
mission: >
To provide a framework for performance analysis and benchmarking of
individual OpenStack components as well as full production OpenStack
cloud deployments.
url: https://wiki.openstack.org/wiki/Rally
deliverables:
rally:
repos:
- openstack/rally
rally-openstack:
repos:
- openstack/rally-openstack
performance-docs:
release-management: none
repos:
- openstack/performance-docs
Release Management:
leadership_type: distributed
irc-channel: openstack-release
mission: >
Coordinating the release of OpenStack deliverables, by defining the
overall development cycle, release models, publication processes,
versioning rules and tools, then enabling project teams to produce
their own releases.
url: https://wiki.openstack.org/wiki/Release_Management
deliverables:
release-test:
repos:
- openstack/release-test
releases:
release-management: none
repos:
- openstack/releases
reno:
repos:
- openstack/reno
specs-cookiecutter:
release-management: none
repos:
- openstack/specs-cookiecutter
liaisons:
release:
- name: Thierry Carrez
irc: ttx
email: thierry@openstack.org
- name: Elod Illes
irc: elodilles
email: elod.illes@est.tech
- name: Jens Harbott
irc: frickler
email: frickler@offenerstapel.de
tact-sig:
- name: Jeremy Stanley
irc: fungi
email: fungi@yuggoth.org
security:
- name: Thierry Carrez
irc: ttx
email: thierry@openstack.org
tc-liaison:
- name: Ghanshyam Mann
irc: gmann
email: gmann@ghanshyammann.com
requirements:
leadership_type: distributed
irc-channel: openstack-requirements
service: Common dependency management
mission: >
Coordinating and converging the libraries used by OpenStack projects,
while ensuring that all libraries are compatible both technically and
from a licensing standpoint.
url: https://wiki.openstack.org/wiki/Requirements
deliverables:
requirements:
repos:
- openstack/requirements
liaisons:
release:
- name: Matthew Thode
irc: prometheanfire
email: mthode@mthode.org
- name: Tony Breeds
irc: tonyb
email: tony@bakeyournoodle.com
tact-sig:
- name: Matthew Thode
irc: prometheanfire
email: mthode@mthode.org
- name: Tony Breeds
irc: tonyb
email: tony@bakeyournoodle.com
security:
- name: Matthew Thode
irc: prometheanfire
email: mthode@mthode.org
- name: Tony Breeds
irc: tonyb
email: tony@bakeyournoodle.com
tc-liaison:
- name: Ghanshyam Mann
irc: gmann
email: gmann@ghanshyammann.com
- name: Dmitriy Rabotyagov
irc: noonedeadpunk
email: noonedeadpunk@gmail.com
skyline:
ptl:
name: Wenxiang Wu
irc: wu_wenxiang
email: wu.wenxiang@99cloud.net
appointed:
- zed
- '2024.1'
- '2024.2'
irc-channel: openstack-skyline
service: Dashboard
mission: >
      Skyline is an OpenStack dashboard with an optimized UI and UX. It is
      built on a modern technology stack and ecosystem, is easier for
      developers to maintain and for users to operate, and delivers higher
      concurrency performance.
url: https://wiki.openstack.org/wiki/Skyline
deliverables:
skyline-apiserver:
repos:
- openstack/skyline-apiserver
skyline-console:
repos:
- openstack/skyline-console
storlets:
ptl:
name: Takashi Kajinami
irc: tkajinam
email: kajinamit@oss.nttdata.com
irc-channel: openstack-storlets
service: Compute inside Object Storage service
mission: >
      To enable a user-friendly, cost-effective, scalable and secure way of
      executing storage-centric user-defined functions near the data within
      OpenStack Swift.
url: https://wiki.openstack.org/wiki/Storlets
deliverables:
storlets:
repos:
- openstack/storlets
sunbeam:
ptl:
name: Guillaume Boutry
irc: gboutry
email: guillaume.boutry@canonical.com
appointed:
- '2024.1'
irc-channel: openstack-sunbeam
service: Deployment and operational tooling for OpenStack
mission: >
To enable deployment and operation of OpenStack at any scale - from a
single node to small scale edge deployments and large scale clouds
      containing many thousands of hypervisors - leveraging a hybrid deployment
model using Juju to manage both Kubernetes components and machine based
components through the use of charms.
url: https://opendev.org/openstack/sunbeam-charms/src/branch/main/ops-sunbeam/README.rst
deliverables:
sunbeam-charms:
release-management: external
repos:
- openstack/sunbeam-charms
swift:
ptl:
name: Tim Burke
irc: timburke
email: tburke@nvidia.com
appointed:
- victoria
- '2023.1'
- '2023.2'
- '2025.1'
irc-channel: openstack-swift
service: Object Storage service
mission: >
Provide software for storing and retrieving lots of data
with a simple API. Built for scale and optimized for durability,
availability, and concurrency across the entire data set.
url: https://wiki.openstack.org/wiki/Swift
deliverables:
liberasurecode:
release-management: none
repos:
- openstack/liberasurecode
pyeclib:
release-management: none
repos:
- openstack/pyeclib
python-swiftclient:
repos:
- openstack/python-swiftclient
swift:
repos:
- openstack/swift
swift-bench:
repos:
- openstack/swift-bench
tacker:
ptl:
name: Yasufumi Ogawa
irc: yasufum
email: yasufum.o@gmail.com
appointed:
- victoria
irc-channel: tacker
service: NFV Orchestration service
mission: >
To implement Network Function Virtualization (NFV) Orchestration services
and libraries for end-to-end life-cycle management of Network Services
and Virtual Network Functions (VNFs).
url: https://wiki.openstack.org/wiki/Tacker
deliverables:
tacker:
repos:
- openstack/tacker
tacker-horizon:
repos:
- openstack/tacker-horizon
python-tackerclient:
repos:
- openstack/python-tackerclient
tacker-specs:
release-management: none
repos:
- openstack/tacker-specs
heat-translator:
repos:
- openstack/heat-translator
tosca-parser:
repos:
- openstack/tosca-parser
Telemetry:
ptl:
name: Erno Kuvaja
irc: jokke_
email: jokke@usr.fi
appointed:
- train
irc-channel: openstack-telemetry
service: Telemetry service
mission: >
To reliably collect measurements of the utilization of the physical and
virtual resources comprising deployed clouds, persist these data for
subsequent retrieval and analysis, and trigger actions when defined
criteria are met.
url: https://wiki.openstack.org/wiki/Telemetry
deliverables:
aodh:
repos:
- openstack/aodh
ceilometer:
repos:
- openstack/ceilometer
telemetry-specs:
release-management: none
repos:
- openstack/telemetry-specs
ceilometermiddleware:
repos:
- openstack/ceilometermiddleware
python-aodhclient:
repos:
- openstack/python-aodhclient
python-observabilityclient:
repos:
- openstack/python-observabilityclient
telemetry-tempest-plugin:
repos:
- openstack/telemetry-tempest-plugin
trove:
ptl:
name: wu chunyang
irc: wuchunyang
email: wchy1001@gmail.com
appointed:
- stein
- zed
- '2024.1'
- '2024.2'
irc-channel: openstack-trove
service: Database service
mission: >
To provide scalable and reliable Cloud Database as a Service functionality
for both relational and non-relational database engines, and to continue to
improve its fully-featured and extensible open source framework.
url: https://wiki.openstack.org/wiki/Trove
deliverables:
python-troveclient:
repos:
- openstack/python-troveclient
trove:
repos:
- openstack/trove
trove-dashboard:
repos:
- openstack/trove-dashboard
trove-specs:
release-management: none
repos:
- openstack/trove-specs
trove-tempest-plugin:
repos:
- openstack/trove-tempest-plugin
venus:
ptl:
name: Eric Zhang
irc: No nick supplied
email: zhanglf01@inspur.com
appointed:
- zed
- '2023.1'
- '2024.1'
irc-channel: openstack-venus
service: Log management service
mission: >
    Venus is a unified log management module that addresses the key
    requirements of OpenStack for log storage, retrieval and analysis.
    It provides a one-stop solution for log collection, cleaning,
    indexing, analysis, alarming, visualization and report generation.
url: https://wiki.openstack.org/wiki/Venus
deliverables:
venus:
repos:
- openstack/venus
venus-specs:
release-management: none
repos:
- openstack/venus-specs
venus-dashboard:
repos:
- openstack/venus-dashboard
python-venusclient:
repos:
- openstack/python-venusclient
venus-tempest-plugin:
repos:
- openstack/venus-tempest-plugin
vitrage:
ptl:
name: Dmitriy Rabotyagov
irc: noonedeadpunk
email: noonedeadpunk@gmail.com
appointed:
- '2023.2'
irc-channel: openstack-vitrage
service: RCA (Root Cause Analysis) service
mission: >
To organize, analyze and visualize OpenStack alarms & events, yield
insights regarding the root cause of problems and deduce their existence
before they are directly detected.
url: https://wiki.openstack.org/wiki/Vitrage
deliverables:
vitrage:
repos:
- openstack/vitrage
vitrage-specs:
release-management: none
repos:
- openstack/vitrage-specs
vitrage-tempest-plugin:
repos:
- openstack/vitrage-tempest-plugin
python-vitrageclient:
repos:
- openstack/python-vitrageclient
vitrage-dashboard:
repos:
- openstack/vitrage-dashboard
xstatic-dagre:
repos:
- openstack/xstatic-dagre
xstatic-dagre-d3:
repos:
- openstack/xstatic-dagre-d3
xstatic-graphlib:
repos:
- openstack/xstatic-graphlib
xstatic-lodash:
repos:
- openstack/xstatic-lodash
xstatic-moment:
repos:
- openstack/xstatic-moment
xstatic-moment-timezone:
repos:
- openstack/xstatic-moment-timezone
watcher:
leadership_type: distributed
liaisons:
release:
- name: Sean Mooney
irc: sean-k-mooney
email: smooney@redhat.com
tact-sig:
- name: Marios Andreou
irc: marios
email: marios@redhat.com
- name: Chandan Kumar
irc: chandankumar
email: chkumar@redhat.com
security:
- name: Dan Smith
irc: dansmith
email: danms@danplanet.com
tc-liaison:
- name: Ghanshyam Mann
irc: gmann
email: gmann@ghanshyammann.com
irc-channel: openstack-watcher
service: Infrastructure Optimization service
mission: >
Watcher's goal is to provide a flexible and scalable resource optimization
service for multi-tenant OpenStack-based clouds.
url: https://wiki.openstack.org/wiki/Watcher
deliverables:
watcher:
repos:
- openstack/watcher
watcher-specs:
release-management: none
repos:
- openstack/watcher-specs
watcher-tempest-plugin:
repos:
- openstack/watcher-tempest-plugin
python-watcherclient:
repos:
- openstack/python-watcherclient
watcher-dashboard:
repos:
- openstack/watcher-dashboard
zaqar:
ptl:
name: Hao Wang
irc: wanghao
email: sxmatch1986@gmail.com
appointed:
- train
- victoria
- xena
- zed
irc-channel: openstack-zaqar
service: Message service
mission: >
To produce an OpenStack messaging service that affords a
variety of distributed application patterns in an efficient,
scalable and highly-available manner, and to create and maintain
associated Python libraries and documentation.
url: https://wiki.openstack.org/wiki/Zaqar
deliverables:
python-zaqarclient:
repos:
- openstack/python-zaqarclient
zaqar:
repos:
- openstack/zaqar
zaqar-specs:
release-management: none
repos:
- openstack/zaqar-specs
zaqar-tempest-plugin:
repos:
- openstack/zaqar-tempest-plugin
zaqar-ui:
repos:
- openstack/zaqar-ui
zun:
ptl:
name: Hongbin Lu
irc: No nick supplied
email: hongbin034@gmail.com
appointed:
- xena
- yoga
- zed
- '2023.1'
irc-channel: openstack-zun
service: Containers service
mission: >
To provide an OpenStack containers service that integrates with various
container technologies for managing application containers on OpenStack.
url: https://wiki.openstack.org/wiki/Zun
deliverables:
python-zunclient:
repos:
- openstack/python-zunclient
zun:
repos:
- openstack/zun
zun-tempest-plugin:
repos:
- openstack/zun-tempest-plugin
zun-ui:
repos:
- openstack/zun-ui
kuryr:
repos:
- openstack/kuryr
kuryr-libnetwork:
repos:
- openstack/kuryr-libnetwork