This patch updates governance for new repositories added to the
OpenStack-Ansible project that implement the remaining currently
supported OpenStack projects.
Change-Id: I3f209979e965f21c021a8c4dedbe76270307b9cb
Depends-On: I791d6c7721ed1e7ca2ae8631699fca24a1f917cf
Update the 'fuel' project to include the new 'fuel-ui' repository,
to which the 'fuel-web' JavaScript code has been moved.
Change-Id: I54e0040cee3a98273143a819fc1eff126154fa2c
Depends-On: Id6286ba1de308a57d16d47b3d46ce36faecd93d0
In the M release, the 'ceilometer-specs' project was renamed to
'telemetry-specs', and the git repository was renamed as well.
The change set: https://review.openstack.org/#/c/248897/
Change-Id: I4c4f4536d1f93ed31e733f1e88c1e36dd1183047
Closes-Bug: #1550119
Following the directive
http://governance.openstack.org/reference/new-projects-requirements.html
OpenStack Mission Alignment:
Dragonflow is a distributed SDN controller for OpenStack Neutron
supporting distributed Switching, Routing, DHCP and more.
Our project mission is to implement advanced networking services in
a manner that is efficient, elegant and simple.
Dragonflow is designed to support large scale deployments with a
focus on latency and performance, as well as providing advanced
innovative services that run locally on each compute node,
with container technology in mind.
Our Mission Statement:
1) Implement Neutron APIs using SDN principles,
while keeping both the plug-in and the implementation fully
under the OpenStack project and its governance.
2) 100% open source, contributors are welcome to partner
and share a mutual vision.
3) Lightweight and Simple in terms of code size and
complexity, so new users / contributors have a simple
and fast ramp-up.
4) Aim for performance-intensive environments, where
latency is critical, while being small and intuitive
enough to run in small deployments as well.
Completely pluggable design, easy to extend and enhance.
5) We *truly* believe in a distributed control plane.
Following the 4 opens
Open Source:
Dragonflow is 100% open source; everything from implementation
to design to future features and the roadmap happens and is shared
openly, and is presented at various meetups and in documents.
Dragonflow aims to become a production-grade, open source
implementation of the OpenStack Networking API.
Open Community:
We are working closely with the community to share use cases and problems
and decide on future priorities for the project.
There are already several contributors from different companies who both
contribute code and plan to deploy Dragonflow.
It is part of our mission to collaborate with as many members/companies
in the community as possible, and we welcome any feedback and any desire
by others to help us shape Dragonflow's future.
Open Development:
- All Dragonflow code is code reviewed in the OpenStack CI
- Dragonflow has a core team which openly discusses all issues
- Dragonflow supports gate tests which run unit, fullstack, Tempest
and Rally tests.
- Dragonflow collaborates with Neutron and treats the Neutron objects/data
model as a first-class citizen in our design and implementation
- Dragonflow has no Northbound API other than the OpenStack/Neutron API
- Bugs are managed in OpenStack Launchpad
Open Design:
- All designs and specs are published for review and managed in
OpenStack Launchpad
- Dragonflow conducts a weekly IRC meeting to discuss all designs
and the future roadmap
- The Dragonflow repository contains extensive documentation and diagrams
explaining the current design and code as well as the future roadmap
and ideas.
- Everything is discussed openly, either on the review board, in the
Dragonflow IRC channel (which is logged), or on the mailing list.
- Dragonflow's mission is to become an integral part of OpenStack
Change-Id: I2e36c75baabef2a9e78e1c466abc5a1597b66ca4
This updates governance for a new repository that will be moved out of
the Fuel-Main project as a subproject repository.
Change-Id: I3e9cc2277d95b41d695953457460f8c8a16834bd
Depends-On: Ia4e98cdd20ef5c347405e9f2b3a805bfa1c23761
Create a new tag describing which deliverables are properly
following the stable branch policy. This tag will be maintained
by the newly-formed Stable Branch Maintenance Team.
Change-Id: Ib810b4915bb77a1f2d92bae0aec7009140adcea6
These people were active translators from 2015-08-01 to 2016-01-30.
The name list was compiled by querying the Zanata translator statistics API.
It is necessary to grant these people ATC status in order to make them
eligible to vote in PTL elections and to give them the discount codes for
the Austin summit.
Change-Id: Ib270b1ef09892314ef449e0cf7b288e7f2d974ef
This patch updates governance for a new repository in the
OpenStack-Ansible project which implements Bare Metal
(Ironic) deployment infrastructure as part of an OpenStack
environment.
Depends-On: I590f5ade90b3e37af7f1b8ee333000d4f993f8c5
Change-Id: Ide66c7ee59192ac441ac2919028eca0ad665ceea
This patch is a follow-up to our previous resolution suggesting an
update to the OpenStack mission statement. The OpenStack Foundation
Board of Directors had feedback and requested it be discussed further on
the foundation mailing list. This version reflects the outcome of that
discussion.
Change-Id: I16a649872414099b394ec1543fb6e73e7de72151
Following the directive
http://governance.openstack.org/reference/new-projects-requirements.html
OpenStack Mission Alignment:
OpenStack and Neutron are no longer the new kid on the block. Neutron
has matured, its popularity in OpenStack deployments over
nova-network is increasing, and it has a very rich ecosystem of plugins
and drivers which provide networking solutions and services (like
LBaaS, VPNaaS and FWaaS), all of which implement the Neutron
abstraction and intend to be interchangeable for cloud deployers.
What we noticed with regard to container networking, specifically
in mixed environments of containers, virtual machines and
bare metal instances, is that every networking solution for virtual
machines and bare metal instances tries to reinvent and enable
networking for containers, this time using the Docker API (or some
other abstraction). OpenStack Magnum, for example, has to introduce an
abstraction layer for different libnetwork drivers depending on the
Container Orchestration Engine used. It would be ideal if Kuryr could
be the default for Magnum COEs.
The idea behind Kuryr is to leverage the abstraction and
all the hard work that was put into Neutron and its plugins and services,
and to use that to provide production-grade networking for container use
cases. Instead of each independent Neutron plugin or solution trying
to find and close the gaps, we can concentrate the effort and focus
in one place: Kuryr.
Kuryr aims to be the “integration bridge” between several communities:
the container communities (Docker / Kubernetes / Mesos) and Neutron. It
proposes and drives the changes needed in Neutron (or in the container
frameworks) to fulfill the use cases specific to
container networking.
It is important to note that Kuryr is NOT a networking solution by
itself, nor does it attempt to become one. The Kuryr effort is focused
on being the courier that delivers Neutron networking and services to
Docker / Kubernetes / Mesos.
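For illustration only (a hedged sketch, not Kuryr's implementation), a
libnetwork remote driver of the kind described above is a small HTTP
service that Docker calls when networks and endpoints are created. The
endpoint names below follow Docker's libnetwork remote driver plugin
API; the Flask app, the port number and the stubbed-out Neutron call
are assumptions made for this example:

    # Hedged sketch only -- not Kuryr code. Maps a Docker "create
    # network" request onto a (stubbed) Neutron call.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route('/Plugin.Activate', methods=['POST'])
    def activate():
        # Tell Docker which plugin interfaces this service implements.
        return jsonify({'Implements': ['NetworkDriver']})

    @app.route('/NetworkDriver.CreateNetwork', methods=['POST'])
    def create_network():
        payload = request.get_json(force=True)
        # A real driver would create a Neutron network here (for
        # example via python-neutronclient) and record the mapping
        # from payload['NetworkID'] to the backing Neutron network.
        return jsonify({})

    if __name__ == '__main__':
        app.run(port=23750)  # arbitrary port chosen for this sketch

A real driver also has to handle endpoint creation, IPAM and
join/leave requests; the sketch is only meant to show that container
networking requests are translated into Neutron API calls rather than
reimplemented.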
Following the 4 opens
Open Source:
Kuryr is 100% open source; everything from implementation
to design to future features and the roadmap happens and is shared
openly, and is presented at various meetups and in documents.
Open Community:
Kuryr's workflows and stats already speak for themselves on this
front. Kuryr has a versatile community and is solving
use cases for many different companies while discussing, reviewing
and implementing them together.
Open Development:
- All Kuryr code is code reviewed in the OpenStack CI
- Kuryr has a core team which openly discusses all issues and is made
up of people from different companies
- Kuryr supports gate tests which run unit, fullstack, Tempest
and Rally tests.
- Kuryr collaborates with Neutron, Magnum and Kolla and all the
various Neutron backends and Neutron advanced services
Open Design:
- All designs and specs are published for review and managed in
OpenStack Launchpad
- Kuryr conducts a weekly IRC meeting to discuss all designs
and the future roadmap.
The IRC meeting is held at alternating times to fit all the different
time zones
- The Kuryr repository contains extensive documentation and diagrams
explaining the current design and code as well as the future roadmap
and ideas.
- Everything is discussed openly, either on the review board, in the
Kuryr IRC channel (which is logged), or on the mailing list.
Change-Id: I81d4677f9d0081d1b967305956c1bec85cb765ea
Prompted by If92ab9d473f8ea8c861584dfc6d3e6a9ff7fdb6a, this change
clarifies that it's okay to use any FOSS program as a test
requirement for an OpenStack project.
Change-Id: I77d862b4c867912cb66f100839c875b7063b6b4b
This is like the api-wg, but a dedicated repository for maintaining the
service types authority.
Depends-On: I75f7066dca7b26204fa7a0196fd019c1b33aa3a3
Change-Id: I1a28446fc3279b69f05f8089d30b05b6e8d14801
Tacker is an OpenStack ecosystem project with a mission to build NFV
Orchestration services for OpenStack. Tacker intends to support
life-cycle management of Network Services and Virtual Network Functions
(VNF). This service is designed to be compatible with the ETSI NFV
Architecture Framework [1].
NFV Orchestration consists of two major components - VNF Manager and NFV
Orchestrator. This project envisions building both of these components
under the OpenStack platform. The VNF Manager (VNFM) handles the life-cycle
management of VNFs. It would instantiate VNFs on an OpenStack VIM and
facilitate configuration, monitoring, healing and scaling of the VMs
backing the VNF. Tacker plans to use many existing OpenStack services
to realize these features. NFV Orchestrator (NFVO) provides end-to-end
Network Service Orchestration. NFVO would in turn use the VNFM component to
stand up the VNFs composed in a Network Service. NFVO will also render
VNF Forwarding Graphs (VNFFG) using a Service Function Chaining (SFC)
API across the instantiated VNFs. Tacker intends to use Neutron's
networking-sfc API [2] for this purpose.
NFV workflows are typically described using templates. The current
template schema of choice is TOSCA, which is based on an OASIS standard [3].
At this moment TOSCA has the leading mindshare across VNF vendors and
operators. The Tacker project is working closely with the OASIS TOSCA NFV
standards group to shape the evolution of its Simple Profile for NFV [4].
The project is also leveraging and contributing to the Heat Translator's
tosca-parser work [5]. Beyond TOSCA, other templating schemes can be
introduced in the future.
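To make the template-driven approach concrete, here is a minimal
hedged sketch (not Tacker code) that assumes the tosca-parser library
is installed and that a local TOSCA Simple Profile file named
vnfd.yaml exists:

    # Minimal sketch: load a TOSCA template and list its node templates.
    from toscaparser.tosca_template import ToscaTemplate

    tosca = ToscaTemplate('vnfd.yaml')
    for node in tosca.nodetemplates:
        # In an NFV profile these would be nodes such as VDUs,
        # connection points and virtual links, with their TOSCA types.
        print(node.name, node.type)

Tacker builds on this kind of parsed model to drive VNF instantiation
and lifecycle operations through existing OpenStack services.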
Tacker started as a ServiceVM project under Neutron during the Atlanta
Summit (2014). The evolution was slow and interest dropped off almost
entirely after the Neutron vendor plugin decomposition. However, the
project had code assets for template-based lifecycle management of
ServiceVMs. The remaining Tacker team members met in early 2015,
brainstormed and eventually decided to pivot to the ETSI NFV
Orchestration use case. ETSI NFV envisions information-model-based
orchestration and lifecycle management of Virtual Network Functions.
Since then the Tacker developer community has grown over the last three
cycles and is now getting contributions from a diverse set of
participants [6] [7]. Tacker is also now actively collaborating with
downstream OPNFV projects like SFC [8], Parser [9] and Multisite [10].
Tacker functionality has been demonstrated at both the OpenStack
Vancouver and Tokyo Summits and at the recent OPNFV Summit.
The Tacker project strictly follows the Four Opens [11] suggested by the
OpenStack Foundation. Tacker code is developed under the Apache 2.0 license.
All code is submitted and reviewed through OpenStack Gerrit. The project
maintains a core team that approves all changes. Bugs are filed, reviewed
and tracked in Launchpad [12]. The project obeys the coordinated project
interfaces, including tox, pbr, global-requirements, etc. Tacker gate now
runs pep8, py27, docs tasks and a dsvm functional test.
In summary, before Tacker, operators were expected to string together custom
solutions using Heat, Ceilometer, etc. to achieve similar functionality.
Tacker reduces such duplicated and complex effort in the industry by bringing
together a community of NFV operators and VNF vendors to collaborate and
build out a template based workflow engine and a higher level OpenStack
"NFV Orchestration" API.
[1] https://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.02.01_60/gs_nfv002v010201p.pdf
[2] http://docs.openstack.org/developer/networking-sfc/
[3] https://www.oasis-open.org/committees/tosca/
[4] http://docs.oasis-open.org/tosca/tosca-nfv/v1.0/tosca-nfv-v1.0.html
[5] https://github.com/openstack/tosca-parser
[6] http://stackalytics.openstack.org/?project_type=openstack-others&module=tacker&metric=patches
[7] http://stackalytics.com/report/contribution/tacker/90
[8] https://wiki.opnfv.org/service_function_chaining
[9] https://wiki.opnfv.org/parser
[10] https://wiki.opnfv.org/multisite
[11] https://github.com/openstack/governance/blob/master/reference/opens.rst
[12] https://launchpad.net/tacker
[13] http://git.openstack.org/cgit/openstack/tacker
[14] http://git.openstack.org/cgit/openstack/python-tackerclient
[15] http://git.openstack.org/cgit/openstack/tacker-horizon
[16] http://git.openstack.org/cgit/openstack/tacker-specs
Change-Id: Idd7e9e8e4d428f9de28d8527e2f459b8ab8b8288
python-fuelclient is currently tagged with a
release:cycle-with-intermediary model, but everything
else in Fuel is using a release:independent model and
python-fuelclient doesn't seem to follow cycles anyway.
This will need the Fuel PTL's +1.
Change-Id: Ia6a6f6d122b4d0cc57d24b47bea7ad7ff5a2af38
The library will be used by the AMT and DRAC drivers in Ironic, which
use the WSMan protocol for communication. Both previously used
openwsman, but the DRAC driver recently moved to its own client
implementation, which is also in the interest of the AMT driver because
of openwsman's limitations.
Change-Id: If40283a24a6656808e43e5b010222f5b5d80da7e
Depends-On: I431f1607f01c0fedda1ab91b602ed440b2f7508a