Merge "Retire cloud-init"

Zuul 2019-05-02 17:09:01 +00:00 committed by Gerrit Code Review
commit 09c3dd9aec
113 changed files with 9 additions and 8666 deletions

.gitignore vendored

@@ -1,8 +0,0 @@
.tox/
cloudinit.egg-info/
*.pyc
doc/build
doc/source/api/
ChangeLog
AUTHORS
cover/


@@ -1,47 +0,0 @@
=====================
Hacking on cloud-init
=====================
To get changes into cloud-init, the process to follow is:
* Fork from github, create a branch and make your changes
- ``git clone https://github.com/openstack/cloud-init``
- ``cd cloud-init``
- ``echo hack``
* Run the tests and the code formatting / lint checks, and address any issues:
- ``tox``
* Commit / amend your changes (before review, write a good commit message: a
  one-line summary, followed by an empty line, followed by expanded comments).
- ``git commit``
* Push to http://review.openstack.org:
- ``git-review``
* Before your changes can be accepted, you must sign the `Canonical
Contributors License Agreement`_. Use 'Scott Moser' as the 'Project
contact'. To check whether you've already signed it, look for your
name in the `Canonical Contributor Agreement Team`_ on Launchpad.
Then be patient and wait (or ping someone on cloud-init team).
* `Core reviewers/maintainers`_
Remember: the more involved you are in the project, the more everyone
benefits (including you).
**Contacting us:**
Feel free to ping the folks listed above and/or join ``#cloud-init`` on
`freenode`_ (`IRC`_) if you have any questions.
.. _Core reviewers/maintainers: https://review.openstack.org/#/admin/groups/665,members
.. _IRC: irc://chat.freenode.net/cloud-init
.. _freenode: http://freenode.net/
.. _Canonical Contributors License Agreement: http://www.ubuntu.com/legal/contributors
.. _Canonical Contributor Agreement Team: https://launchpad.net/~contributor-agreement-canonical/+members#active
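
A commit message following the convention above might look like this (the
summary line echoes this change; the body text is purely illustrative)::

    Retire cloud-init

    This repository is no longer developed under OpenStack.
    Remove the sources and point contributors at the upstream
    project instead.

The one-line summary stands alone in ``git log --oneline`` output, and the
blank line separating it from the body is what tools such as ``git-review``
and Gerrit rely on when displaying the change.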

LICENSE

@@ -1,22 +0,0 @@
Copyright 2015 Canonical Ltd.
This program is free software: you can redistribute it and/or modify it under
the terms of the GNU General Public License version 3, as published by the
Free Software Foundation.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranties of MERCHANTABILITY,
SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License along with
this program. If not, see <http://www.gnu.org/licenses/>
Alternatively, this program may be used under the terms of the Apache License,
Version 2.0, in which case the provisions of that license are applicable
instead of those above. If you wish to allow use of your version of this
program under the terms of the Apache License, Version 2.0 only, indicate
your decision by deleting the provisions above and replace them with the notice
and other provisions required by the Apache License, Version 2.0. If you do not
delete the provisions above, a recipient may use your version of this file
under the terms of either the GPLv3 or the Apache License, Version 2.0.


@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,674 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
@ -1,8 +0,0 @@
include AUTHORS
include ChangeLog
include README.rst
exclude .gitignore
exclude .gitreview
global-exclude *.pyc
README Normal file
@ -0,0 +1,9 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org.
@ -1,50 +0,0 @@
Cloud-init
==========
*Cloud-init initializes systems for cloud environments.*
Join us
-------
- http://launchpad.net/cloud-init
Bugs
----
Bug reports should be opened at
https://bugs.launchpad.net/cloud-init/+filebug
On Ubuntu Systems, you can file bugs with:
::
$ ubuntu-bug cloud-init
Testing and requirements
------------------------
Requirements
~~~~~~~~~~~~
TBD
Tox.ini
~~~~~~~
Our ``tox.ini`` file describes several test environments that allow testing
cloud-init with different Python versions and sets of installed requirements.
Please refer to the `tox`_ documentation to understand how to make these test
environments work for you.
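A minimal sketch of what such a ``tox.ini`` might contain (the environment
names and requirements file below are illustrative assumptions, not taken
from the actual file):

::

    [tox]
    envlist = py27,py34,pep8

    [testenv]
    deps = -r{toxinidir}/test-requirements.txt
    commands = python -m pytest {posargs}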
Developer documentation
-----------------------
We also have Sphinx documentation in ``doc/source``.
*To build it, run:*
::
$ python setup.py build_sphinx
.. _tox: http://tox.testrun.org/
@ -1,4 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
@ -1,4 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
@ -1,8 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
class CloudInitError(Exception):
pass
@ -1,113 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
from __future__ import absolute_import
import logging
import sys
_BASE = __name__.split(".", 1)[0]
# Add a BLATHER level. This matches the multiprocessing utils.py module (and
# kazoo and others), which declare a similar level. BLATHER is for information
# that is even lower level than regular DEBUG and emits so much runtime detail
# that it is only useful to low-level/certain consumers...
BLATHER = 5
# Copy over *select* attributes to make it easy to use this module.
CRITICAL = logging.CRITICAL
DEBUG = logging.DEBUG
ERROR = logging.ERROR
FATAL = logging.FATAL
INFO = logging.INFO
NOTSET = logging.NOTSET
WARN = logging.WARN
WARNING = logging.WARNING
class _BlatherLoggerAdapter(logging.LoggerAdapter):
def blather(self, msg, *args, **kwargs):
"""Delegate a blather call to the underlying logger."""
self.log(BLATHER, msg, *args, **kwargs)
def warn(self, msg, *args, **kwargs):
"""Delegate a warning call to the underlying logger."""
self.warning(msg, *args, **kwargs)
# TODO(harlowja): we should remove when we no longer have to support 2.6...
if sys.version_info[0:2] == (2, 6): # pragma: nocover
from logutils.dictconfig import dictConfig
class _FixedBlatherLoggerAdapter(_BlatherLoggerAdapter):
"""Ensures isEnabledFor() exists on adapters that are created."""
def isEnabledFor(self, level):
return self.logger.isEnabledFor(level)
_BlatherLoggerAdapter = _FixedBlatherLoggerAdapter
# Taken from python2.7 (same in python3.4)...
class _NullHandler(logging.Handler):
"""This handler does nothing.
It's intended to be used to avoid the
"No handlers could be found for logger XXX" one-off warning. This is
important for library code, which may contain code to log events. If a
user of the library does not configure logging, the one-off warning
might be produced; to avoid this, the library developer simply needs
to instantiate a _NullHandler and add it to the top-level logger of the
library module or package.
"""
def handle(self, record):
"""Stub."""
def emit(self, record):
"""Stub."""
def createLock(self):
self.lock = None
else:
from logging.config import dictConfig
_NullHandler = logging.NullHandler
def getLogger(name=_BASE, extra=None):
logger = logging.getLogger(name)
if not logger.handlers:
logger.addHandler(_NullHandler())
return _BlatherLoggerAdapter(logger, extra=extra)
def configure_logging(log_to_console=False):
logging_config = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'standard': {
'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s',
},
},
'handlers': {
'console': {
'level': 'INFO',
'class': 'logging.StreamHandler',
'formatter': 'standard',
},
},
'loggers': {
'': {
'handlers': [],
'level': 'DEBUG',
'propagate': True,
},
},
}
if log_to_console:
logging_config['loggers']['']['handlers'].append('console')
dictConfig(logging_config)
@ -1,4 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
@ -1,77 +0,0 @@
# Copyright (C) 2015 Canonical Ltd.
# Copyright 2015 Cloudbase Solutions Srl
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import importlib
import platform
import six
__all__ = (
'get_osutils',
'OSUtils',
)
def get_osutils():
"""Obtain the OS utils object for the underlying platform."""
name, _, _ = platform.linux_distribution()
if not name:
name = platform.system()
name = name.lower()
location = "cloudinit.osys.{0}.base".format(name)
module = importlib.import_module(location)
return module.OSUtils
@six.add_metaclass(abc.ABCMeta)
class OSUtils(object):
"""Base class for an OS utils namespace.
This base class provides a couple of hooks which need to be
implemented by subclasses for each particular OS and distro.
"""
name = None
@abc.abstractproperty
def network(self):
"""Get the network object for the underlying platform."""
@abc.abstractproperty
def filesystem(self):
"""Get the filesystem object for the underlying platform."""
@abc.abstractproperty
def users(self):
"""Get the users object for the underlying platform."""
@abc.abstractproperty
def general(self):
"""Get the general object for the underlying platform."""
@abc.abstractproperty
def user_class(self):
"""Get the user class specific to this operating system."""
@abc.abstractproperty
def route_class(self):
"""Get the route class specific to this operating system."""
@abc.abstractproperty
def interface_class(self):
"""Get the interface class specific to this operating system."""
@ -1,43 +0,0 @@
# Copyright (C) 2015 Canonical Ltd.
# Copyright 2015 Cloudbase Solutions Srl
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
@six.add_metaclass(abc.ABCMeta)
class General(object):
"""Base class for the general namespace.
This class should contain functions common to all OSes
that can't be grouped into a domain-specific namespace.
"""
@abc.abstractmethod
def set_timezone(self, timezone):
"""Change the timezone for the underlying platform.
The `timezone` parameter should be in TZID format,
e.g. 'Africa/Mogadishu'.
"""
@abc.abstractmethod
def set_locale(self, locale):
"""Change the locale for the underlying platform."""
@abc.abstractmethod
def reboot(self):
pass
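The ``six.add_metaclass(abc.ABCMeta)`` pattern above makes the class
uninstantiable until every abstract method is overridden; a Python-3-only
equivalent, with an invented ``LinuxGeneral`` subclass for illustration:

```python
import abc

class General(abc.ABC):  # Python 3 spelling of six.add_metaclass(abc.ABCMeta)
    @abc.abstractmethod
    def set_timezone(self, timezone):
        """Change the timezone (TZID format, e.g. 'Africa/Mogadishu')."""

class LinuxGeneral(General):
    def set_timezone(self, timezone):
        self.timezone = timezone  # a real subclass would call into the OS here

g = LinuxGeneral()
g.set_timezone('Africa/Mogadishu')
```

Instantiating ``General`` directly raises ``TypeError``, which is what the
abstract base buys over a plain class.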
@ -1,156 +0,0 @@
# Copyright (C) 2015 Canonical Ltd.
# Copyright 2015 Cloudbase Solutions Srl
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
from cloudinit import util
__all__ = (
'Network',
'Route',
'Interface',
)
@six.add_metaclass(abc.ABCMeta)
class Network(object):
"""Base network class for network related utilities."""
@abc.abstractmethod
def routes(self):
"""Get the list of the available routes."""
@abc.abstractmethod
def default_gateway(self):
"""Get the default gateway, as a route object."""
@abc.abstractmethod
def interfaces(self):
"""Get the list of the available interfaces."""
@abc.abstractmethod
def hosts(self):
"""Get the list of the available hosts."""
@abc.abstractmethod
def set_hostname(self, hostname):
"""Change the host name of the instance."""
@abc.abstractmethod
def set_static_network_config(self, adapter_name, address, netmask,
broadcast, gateway, dnsnameservers):
"""Configure a new static network."""
@six.add_metaclass(abc.ABCMeta)
class Route(object):
"""Base class for routes."""
def __init__(self, destination, gateway, netmask,
interface, metric,
flags=None, refs=None, use=None, expire=None):
self.destination = destination
self.gateway = gateway
self.netmask = netmask
self.interface = interface
self.metric = metric
self.flags = flags
self.refs = refs
self.use = use
self.expire = expire
def __repr__(self):
return ("Route(destination={!r}, gateway={!r}, netmask={!r})"
.format(self.destination, self.gateway, self.netmask))
@abc.abstractproperty
def is_static(self):
"""Check if this route is static."""
@util.abstractclassmethod
def add(cls, route):
"""Add a new route in the underlying OS.
The `route` parameter should be an instance of :class:`Route`.
"""
@util.abstractclassmethod
def delete(cls, route):
"""Delete a route from the underlying OS.
The `route` parameter should be an instance of :class:`Route`.
"""
@six.add_metaclass(abc.ABCMeta)
class Interface(object):
"""Base class representing an interface.
It provides both attributes for retrieving interface information,
as well as methods for modifying the state of an interface, such
as activating or deactivating it.
"""
def __init__(self, name, mac, index=None, mtu=None,
dhcp_server=None, dhcp_enabled=None):
self._mtu = mtu
self.name = name
self.index = index
self.mac = mac
self.dhcp_server = dhcp_server
self.dhcp_enabled = dhcp_enabled
def __eq__(self, other):
return (self.mac == other.mac and
self.name == other.name and
self.index == other.index)
@abc.abstractmethod
def _change_mtu(self, value):
"""Change the mtu for the underlying interface."""
@util.abstractclassmethod
def from_name(cls, name):
"""Get an instance of :class:`Interface` from an interface name.
E.g. this should retrieve the 'eth0' interface::
>>> Interface.from_name('eth0')
"""
@abc.abstractmethod
def up(self):
"""Activate the current interface."""
@abc.abstractmethod
def down(self):
"""Deactivate the current interface."""
@abc.abstractmethod
def is_up(self):
"""Check if this interface is activated."""
@property
def mtu(self):
return self._mtu
@mtu.setter
def mtu(self, value):
self._change_mtu(value)
self._mtu = value
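The ``mtu`` property/setter pair above routes writes through a subclass hook
before caching the value; a toy standalone version of the same pattern
(``FakeInterface`` is invented for the example):

```python
class FakeInterface:
    """Minimal stand-in reproducing the mtu property pattern above."""

    def __init__(self, mtu=1500):
        self._mtu = mtu
        self.applied = []  # records what _change_mtu was asked to do

    def _change_mtu(self, value):
        # In the real class this is abstract and performs the OS-specific call.
        self.applied.append(value)

    @property
    def mtu(self):
        return self._mtu

    @mtu.setter
    def mtu(self, value):
        self._change_mtu(value)  # push the change to the "OS" first
        self._mtu = value        # then update the cached value

iface = FakeInterface()
iface.mtu = 9000
```

Ordering matters: if ``_change_mtu`` raises, the cached value stays untouched.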
@ -1,67 +0,0 @@
# Copyright (C) 2015 Canonical Ltd.
# Copyright 2015 Cloudbase Solutions Srl
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
from cloudinit import util
@six.add_metaclass(abc.ABCMeta)
class Users(object):
"""Base class for user related operations."""
@abc.abstractmethod
def groups(self):
"""Get a list of the groups available in the system."""
@abc.abstractmethod
def users(self):
"""Get a list of the users available in the system."""
@six.add_metaclass(abc.ABCMeta)
class Group(object):
"""Base class for user groups."""
@util.abstractclassmethod
def create(cls, group_name):
"""Create a new group with the given name."""
@abc.abstractmethod
def add(self, member):
"""Add a new member to this group."""
@six.add_metaclass(abc.ABCMeta)
class User(object):
"""Base class for a user."""
@classmethod
def create(cls, username, password, **kwargs):
"""Create a new user."""
@abc.abstractmethod
def home(self):
"""Get the user's home directory."""
@abc.abstractmethod
def ssh_keys(self):
"""Get the ssh keys for this user."""
@abc.abstractmethod
def change_password(self, password):
"""Change the password for this user."""
@ -1,26 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
from cloudinit.osys import base
from cloudinit.osys.windows import general as general_module
from cloudinit.osys.windows import network as network_module
__all__ = ('OSUtils', )
class OSUtils(base.OSUtils):
"""The OS utils namespace for the Windows platform."""
name = "windows"
network = network_module.Network()
general = general_module.General()
route_class = network_module.Route
# These aren't yet implemented, use `None` for them
# so that we could instantiate the class.
filesystem = user_class = users = None
interface_class = None
@ -1,59 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
"""General utilities for Windows platform."""
import ctypes
from cloudinit import exceptions
from cloudinit.osys import general
from cloudinit.osys.windows.util import kernel32
class General(general.General):
"""General utilities namespace for Windows."""
@staticmethod
def check_os_version(major, minor, build=0):
"""Check if this OS version is equal to or higher than (major, minor)."""
version_info = kernel32.Win32_OSVERSIONINFOEX_W()
version_info.dwOSVersionInfoSize = ctypes.sizeof(
kernel32.Win32_OSVERSIONINFOEX_W)
version_info.dwMajorVersion = major
version_info.dwMinorVersion = minor
version_info.dwBuildNumber = build
mask = 0
for type_mask in [kernel32.VER_MAJORVERSION,
kernel32.VER_MINORVERSION,
kernel32.VER_BUILDNUMBER]:
mask = kernel32.VerSetConditionMask(mask, type_mask,
kernel32.VER_GREATER_EQUAL)
type_mask = (kernel32.VER_MAJORVERSION |
kernel32.VER_MINORVERSION |
kernel32.VER_BUILDNUMBER)
ret_val = kernel32.VerifyVersionInfoW(ctypes.byref(version_info),
type_mask, mask)
if ret_val:
return True
else:
err = kernel32.GetLastError()
if err == kernel32.ERROR_OLD_WIN_VERSION:
return False
else:
raise exceptions.CloudInitError(
"VerifyVersionInfo failed with error: %s" % err)
def reboot(self):
raise NotImplementedError
def set_locale(self, locale):
raise NotImplementedError
def set_timezone(self, timezone):
raise NotImplementedError
@ -1,209 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
"""Network utilities for Windows."""
import contextlib
import ctypes
from ctypes import wintypes
import logging
import subprocess
from six.moves import urllib_parse
from cloudinit import exceptions
from cloudinit.osys import base
from cloudinit.osys import network
from cloudinit.osys.windows.util import iphlpapi
from cloudinit.osys.windows.util import kernel32
from cloudinit.osys.windows.util import ws2_32
from cloudinit import url_helper
MIB_IPPROTO_NETMGMT = 3
_FW_IP_PROTOCOL_TCP = 6
_FW_IP_PROTOCOL_UDP = 17
_FW_SCOPE_ALL = 0
_PROTOCOL_TCP = "TCP"
_PROTOCOL_UDP = "UDP"
_ERROR_FILE_NOT_FOUND = 2
_ComputerNamePhysicalDnsHostname = 5
_MAX_URL_CHECK_RETRIES = 3
LOG = logging.getLogger(__name__)
def _heap_alloc(heap, size):
table_mem = kernel32.HeapAlloc(heap, 0, ctypes.c_size_t(size.value))
if not table_mem:
raise exceptions.CloudInitError(
'Unable to allocate memory for the IP forward table')
return table_mem
def _check_url(url, retries_count=_MAX_URL_CHECK_RETRIES):
LOG.debug("Testing url: %s", url)
try:
url_helper.read_url(url, retries=retries_count)
return True
except url_helper.UrlError:
return False
class Network(network.Network):
"""Network namespace object tailored for the Windows platform."""
@staticmethod
@contextlib.contextmanager
def _get_forward_table():
heap = kernel32.GetProcessHeap()
forward_table_size = ctypes.sizeof(iphlpapi.Win32_MIB_IPFORWARDTABLE)
size = wintypes.ULONG(forward_table_size)
table_mem = _heap_alloc(heap, size)
p_forward_table = ctypes.cast(
table_mem, ctypes.POINTER(iphlpapi.Win32_MIB_IPFORWARDTABLE))
try:
err = iphlpapi.GetIpForwardTable(p_forward_table,
ctypes.byref(size), 0)
if err == iphlpapi.ERROR_INSUFFICIENT_BUFFER:
kernel32.HeapFree(heap, 0, p_forward_table)
table_mem = _heap_alloc(heap, size)
p_forward_table = ctypes.cast(
table_mem,
ctypes.POINTER(iphlpapi.Win32_MIB_IPFORWARDTABLE))
err = iphlpapi.GetIpForwardTable(p_forward_table,
ctypes.byref(size), 0)
if err and err != kernel32.ERROR_NO_DATA:
raise exceptions.CloudInitError(
'Unable to get IP forward table. Error: %s' % err)
yield p_forward_table
finally:
kernel32.HeapFree(heap, 0, p_forward_table)
def routes(self):
"""Get a collection of the available routes."""
routing_table = []
with self._get_forward_table() as p_forward_table:
forward_table = p_forward_table.contents
table = ctypes.cast(
ctypes.addressof(forward_table.table),
ctypes.POINTER(iphlpapi.Win32_MIB_IPFORWARDROW *
forward_table.dwNumEntries)).contents
for row in table:
destination = ws2_32.Ws2_32.inet_ntoa(
row.dwForwardDest).decode()
netmask = ws2_32.Ws2_32.inet_ntoa(
row.dwForwardMask).decode()
gateway = ws2_32.Ws2_32.inet_ntoa(
row.dwForwardNextHop).decode()
index = row.dwForwardIfIndex
flags = row.dwForwardProto
metric = row.dwForwardMetric1
route = Route(destination=destination,
gateway=gateway,
netmask=netmask,
interface=index,
metric=metric,
flags=flags)
routing_table.append(route)
return routing_table
def default_gateway(self):
"""Get the default gateway.
This will actually return a :class:`Route` instance. The gateway
can be accessed with the :attr:`gateway` attribute.
"""
return next((r for r in self.routes() if r.destination == '0.0.0.0'),
None)
def set_metadata_ip_route(self, metadata_url):
"""Set a network route if the given metadata url can't be accessed.
This is a workaround for
https://bugs.launchpad.net/quantum/+bug/1174657.
"""
osutils = base.get_osutils()
if osutils.general.check_os_version(6, 0):
# 169.254.x.x addresses are not getting routed starting from
# Windows Vista / 2008
metadata_netloc = urllib_parse.urlparse(metadata_url).netloc
metadata_host = metadata_netloc.split(':')[0]
if not metadata_host.startswith("169.254."):
return
routes = self.routes()
exists_route = any(route.destination == metadata_host
for route in routes)
if not exists_route and not _check_url(metadata_url):
default_gateway = self.default_gateway()
if default_gateway:
try:
LOG.debug('Setting gateway for host: %s',
metadata_host)
route = Route(
destination=metadata_host,
netmask="255.255.255.255",
gateway=default_gateway.gateway,
interface=None, metric=None)
Route.add(route)
except Exception as ex:
# Ignore it
LOG.exception(ex)
# These are not required by the Windows version for now,
# but we provide them as no-op versions.
def hosts(self):
"""Grab the content of the hosts file."""
raise NotImplementedError
def interfaces(self):
raise NotImplementedError
def set_hostname(self, hostname):
raise NotImplementedError
def set_static_network_config(self, adapter_name, address, netmask,
broadcast, gateway, dnsnameservers):
raise NotImplementedError
class Route(network.Route):
"""Windows route class."""
@property
def is_static(self):
return self.flags == MIB_IPPROTO_NETMGMT
@classmethod
def add(cls, route):
"""Add a new route in the underlying OS.
The function should expect an instance of :class:`Route`.
"""
args = ['ROUTE', 'ADD',
route.destination,
'MASK', route.netmask, route.gateway]
popen = subprocess.Popen(args, shell=False,
stderr=subprocess.PIPE)
_, stderr = popen.communicate()
if popen.returncode or stderr:
# Cannot use the return value to determine the outcome
raise exceptions.CloudInitError('Unable to add route: %s' % stderr)
@classmethod
def delete(cls, _):
"""Delete a route from the underlying OS.
This function should expect an instance of :class:`Route`.
"""
raise NotImplementedError

View File

@ -1,210 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import ctypes
from ctypes import windll
from ctypes import wintypes
from cloudinit.osys.windows.util import kernel32
from cloudinit.osys.windows.util import ws2_32
ERROR_INSUFFICIENT_BUFFER = 122
MAX_ADAPTER_NAME_LENGTH = 256
MAX_ADAPTER_DESCRIPTION_LENGTH = 128
MAX_ADAPTER_ADDRESS_LENGTH = 8
# Do not return IPv6 anycast addresses.
GAA_FLAG_SKIP_ANYCAST = 2
GAA_FLAG_SKIP_MULTICAST = 4
IP_ADAPTER_DHCP_ENABLED = 4
IP_ADAPTER_IPV4_ENABLED = 0x80
IP_ADAPTER_IPV6_ENABLED = 0x0100
MAX_DHCPV6_DUID_LENGTH = 130
IF_TYPE_ETHERNET_CSMACD = 6
IF_TYPE_SOFTWARE_LOOPBACK = 24
IF_TYPE_IEEE80211 = 71
IF_TYPE_TUNNEL = 131
IP_ADAPTER_ADDRESSES_SIZE_2003 = 144
class SOCKET_ADDRESS(ctypes.Structure):
_fields_ = [
('lpSockaddr', ctypes.POINTER(ws2_32.SOCKADDR)),
('iSockaddrLength', wintypes.INT),
]
class IP_ADAPTER_ADDRESSES_Struct1(ctypes.Structure):
_fields_ = [
('Length', wintypes.ULONG),
('IfIndex', wintypes.DWORD),
]
class IP_ADAPTER_ADDRESSES_Union1(ctypes.Union):
_fields_ = [
('Alignment', wintypes.ULARGE_INTEGER),
('Struct1', IP_ADAPTER_ADDRESSES_Struct1),
]
class IP_ADAPTER_UNICAST_ADDRESS(ctypes.Structure):
_fields_ = [
('Union1', IP_ADAPTER_ADDRESSES_Union1),
('Next', wintypes.LPVOID),
('Address', SOCKET_ADDRESS),
('PrefixOrigin', wintypes.DWORD),
('SuffixOrigin', wintypes.DWORD),
('DadState', wintypes.DWORD),
('ValidLifetime', wintypes.ULONG),
('PreferredLifetime', wintypes.ULONG),
('LeaseLifetime', wintypes.ULONG),
]
class IP_ADAPTER_DNS_SERVER_ADDRESS_Struct1(ctypes.Structure):
_fields_ = [
('Length', wintypes.ULONG),
('Reserved', wintypes.DWORD),
]
class IP_ADAPTER_DNS_SERVER_ADDRESS_Union1(ctypes.Union):
_fields_ = [
('Alignment', wintypes.ULARGE_INTEGER),
('Struct1', IP_ADAPTER_DNS_SERVER_ADDRESS_Struct1),
]
class IP_ADAPTER_DNS_SERVER_ADDRESS(ctypes.Structure):
_fields_ = [
('Union1', IP_ADAPTER_DNS_SERVER_ADDRESS_Union1),
('Next', wintypes.LPVOID),
('Address', SOCKET_ADDRESS),
]
class IP_ADAPTER_PREFIX_Struct1(ctypes.Structure):
_fields_ = [
('Length', wintypes.ULONG),
('Flags', wintypes.DWORD),
]
class IP_ADAPTER_PREFIX_Union1(ctypes.Union):
_fields_ = [
('Alignment', wintypes.ULARGE_INTEGER),
('Struct1', IP_ADAPTER_PREFIX_Struct1),
]
class IP_ADAPTER_PREFIX(ctypes.Structure):
_fields_ = [
('Union1', IP_ADAPTER_PREFIX_Union1),
('Next', wintypes.LPVOID),
('Address', SOCKET_ADDRESS),
('PrefixLength', wintypes.ULONG),
]
class NET_LUID_LH(ctypes.Union):
_fields_ = [
('Value', wintypes.ULARGE_INTEGER),
('Info', wintypes.ULARGE_INTEGER),
]
class IP_ADAPTER_ADDRESSES(ctypes.Structure):
_fields_ = [
('Union1', IP_ADAPTER_ADDRESSES_Union1),
('Next', wintypes.LPVOID),
('AdapterName', ctypes.c_char_p),
('FirstUnicastAddress',
ctypes.POINTER(IP_ADAPTER_UNICAST_ADDRESS)),
('FirstAnycastAddress',
ctypes.POINTER(IP_ADAPTER_DNS_SERVER_ADDRESS)),
('FirstMulticastAddress',
ctypes.POINTER(IP_ADAPTER_DNS_SERVER_ADDRESS)),
('FirstDnsServerAddress',
ctypes.POINTER(IP_ADAPTER_DNS_SERVER_ADDRESS)),
('DnsSuffix', wintypes.LPWSTR),
('Description', wintypes.LPWSTR),
('FriendlyName', wintypes.LPWSTR),
('PhysicalAddress', ctypes.c_ubyte * MAX_ADAPTER_ADDRESS_LENGTH),
('PhysicalAddressLength', wintypes.DWORD),
('Flags', wintypes.DWORD),
('Mtu', wintypes.DWORD),
('IfType', wintypes.DWORD),
('OperStatus', wintypes.DWORD),
('Ipv6IfIndex', wintypes.DWORD),
('ZoneIndices', wintypes.DWORD * 16),
('FirstPrefix', ctypes.POINTER(IP_ADAPTER_PREFIX)),
# kernel >= 6.0
('TransmitLinkSpeed', wintypes.ULARGE_INTEGER),
('ReceiveLinkSpeed', wintypes.ULARGE_INTEGER),
('FirstWinsServerAddress',
ctypes.POINTER(IP_ADAPTER_DNS_SERVER_ADDRESS)),
('FirstGatewayAddress',
ctypes.POINTER(IP_ADAPTER_DNS_SERVER_ADDRESS)),
('Ipv4Metric', wintypes.ULONG),
('Ipv6Metric', wintypes.ULONG),
('Luid', NET_LUID_LH),
('Dhcpv4Server', SOCKET_ADDRESS),
('CompartmentId', wintypes.DWORD),
('NetworkGuid', kernel32.GUID),
('ConnectionType', wintypes.DWORD),
('TunnelType', wintypes.DWORD),
('Dhcpv6Server', SOCKET_ADDRESS),
('Dhcpv6ClientDuid', ctypes.c_ubyte * MAX_DHCPV6_DUID_LENGTH),
('Dhcpv6ClientDuidLength', wintypes.ULONG),
('Dhcpv6Iaid', wintypes.ULONG),
]
class Win32_MIB_IPFORWARDROW(ctypes.Structure):
_fields_ = [
('dwForwardDest', wintypes.DWORD),
('dwForwardMask', wintypes.DWORD),
('dwForwardPolicy', wintypes.DWORD),
('dwForwardNextHop', wintypes.DWORD),
('dwForwardIfIndex', wintypes.DWORD),
('dwForwardType', wintypes.DWORD),
('dwForwardProto', wintypes.DWORD),
('dwForwardAge', wintypes.DWORD),
('dwForwardNextHopAS', wintypes.DWORD),
('dwForwardMetric1', wintypes.DWORD),
('dwForwardMetric2', wintypes.DWORD),
('dwForwardMetric3', wintypes.DWORD),
('dwForwardMetric4', wintypes.DWORD),
('dwForwardMetric5', wintypes.DWORD)
]
class Win32_MIB_IPFORWARDTABLE(ctypes.Structure):
_fields_ = [
('dwNumEntries', wintypes.DWORD),
('table', Win32_MIB_IPFORWARDROW * 1)
]
GetAdaptersAddresses = windll.Iphlpapi.GetAdaptersAddresses
GetAdaptersAddresses.argtypes = [
wintypes.ULONG, wintypes.ULONG, wintypes.LPVOID,
ctypes.POINTER(IP_ADAPTER_ADDRESSES),
ctypes.POINTER(wintypes.ULONG)]
GetAdaptersAddresses.restype = wintypes.ULONG
GetIpForwardTable = windll.Iphlpapi.GetIpForwardTable
GetIpForwardTable.argtypes = [
ctypes.POINTER(Win32_MIB_IPFORWARDTABLE),
ctypes.POINTER(wintypes.ULONG),
wintypes.BOOL]
GetIpForwardTable.restype = wintypes.DWORD

View File

@ -1,85 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import ctypes
from ctypes import windll
from ctypes import wintypes
ERROR_BUFFER_OVERFLOW = 111
ERROR_NO_DATA = 232
class GUID(ctypes.Structure):
_fields_ = [
("data1", wintypes.DWORD),
("data2", wintypes.WORD),
("data3", wintypes.WORD),
("data4", wintypes.BYTE * 8)]
def __init__(self, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8):
self.data1 = l
self.data2 = w1
self.data3 = w2
self.data4[0] = b1
self.data4[1] = b2
self.data4[2] = b3
self.data4[3] = b4
self.data4[4] = b5
self.data4[5] = b6
self.data4[6] = b7
self.data4[7] = b8
class Win32_OSVERSIONINFOEX_W(ctypes.Structure):
_fields_ = [
('dwOSVersionInfoSize', wintypes.DWORD),
('dwMajorVersion', wintypes.DWORD),
('dwMinorVersion', wintypes.DWORD),
('dwBuildNumber', wintypes.DWORD),
('dwPlatformId', wintypes.DWORD),
('szCSDVersion', wintypes.WCHAR * 128),
('wServicePackMajor', wintypes.DWORD),
('wServicePackMinor', wintypes.DWORD),
('wSuiteMask', wintypes.DWORD),
('wProductType', wintypes.BYTE),
('wReserved', wintypes.BYTE)
]
GetLastError = windll.kernel32.GetLastError
GetProcessHeap = windll.kernel32.GetProcessHeap
GetProcessHeap.argtypes = []
GetProcessHeap.restype = wintypes.HANDLE
HeapAlloc = windll.kernel32.HeapAlloc
# Note: wintypes.ULONG must be replaced with a 64 bit variable on x64
HeapAlloc.argtypes = [wintypes.HANDLE, wintypes.DWORD, wintypes.ULONG]
HeapAlloc.restype = wintypes.LPVOID
HeapFree = windll.kernel32.HeapFree
HeapFree.argtypes = [wintypes.HANDLE, wintypes.DWORD, wintypes.LPVOID]
HeapFree.restype = wintypes.BOOL
SetComputerNameExW = windll.kernel32.SetComputerNameExW
VerifyVersionInfoW = windll.kernel32.VerifyVersionInfoW
VerSetConditionMask = windll.kernel32.VerSetConditionMask
VerifyVersionInfoW.argtypes = [
ctypes.POINTER(Win32_OSVERSIONINFOEX_W),
wintypes.DWORD, wintypes.ULARGE_INTEGER]
VerifyVersionInfoW.restype = wintypes.BOOL
VerSetConditionMask.argtypes = [wintypes.ULARGE_INTEGER,
wintypes.DWORD,
wintypes.BYTE]
VerSetConditionMask.restype = wintypes.ULARGE_INTEGER
ERROR_OLD_WIN_VERSION = 1150
VER_MAJORVERSION = 1
VER_MINORVERSION = 2
VER_BUILDNUMBER = 4
VER_GREATER_EQUAL = 3

View File

@ -1,54 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import ctypes
from ctypes import windll
from ctypes import wintypes
AF_UNSPEC = 0
AF_INET = 2
AF_INET6 = 23
VERSION_2_2 = (2 << 8) + 2
class SOCKADDR(ctypes.Structure):
_fields_ = [
('sa_family', wintypes.USHORT),
('sa_data', ctypes.c_char * 14),
]
class WSADATA(ctypes.Structure):
_fields_ = [
('opaque_data', wintypes.BYTE * 400),
]
WSAGetLastError = windll.Ws2_32.WSAGetLastError
WSAGetLastError.argtypes = []
WSAGetLastError.restype = wintypes.INT
WSAStartup = windll.Ws2_32.WSAStartup
WSAStartup.argtypes = [wintypes.WORD, ctypes.POINTER(WSADATA)]
WSAStartup.restype = wintypes.INT
WSACleanup = windll.Ws2_32.WSACleanup
WSACleanup.argtypes = []
WSACleanup.restype = wintypes.INT
WSAAddressToStringW = windll.Ws2_32.WSAAddressToStringW
WSAAddressToStringW.argtypes = [
ctypes.POINTER(SOCKADDR), wintypes.DWORD, wintypes.LPVOID,
wintypes.LPWSTR, ctypes.POINTER(wintypes.DWORD)]
WSAAddressToStringW.restype = wintypes.INT
Ws2_32 = windll.Ws2_32
Ws2_32.inet_ntoa.restype = ctypes.c_char_p
def init_wsa(version=VERSION_2_2):
wsadata = WSADATA()
WSAStartup(version, ctypes.byref(wsadata))

View File

@ -1,54 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
"""Various base classes and implementations for finding *plugins*."""
import abc
import pkgutil
import six
from cloudinit import logging
LOG = logging.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
class BaseModuleIterator(object):
"""Base class for describing a *module iterator*
A module iterator is a class that's capable of listing
modules or packages from a specific location, which are
already loaded.
"""
def __init__(self, search_paths):
self._search_paths = search_paths
@abc.abstractmethod
def list_modules(self):
"""List all the modules that this finder knows about."""
class PkgutilModuleIterator(BaseModuleIterator):
"""A class based on the *pkgutil* module for discovering modules."""
@staticmethod
def _find_module(finder, module):
"""Delegate to the *finder* for finding the given module."""
return finder.find_module(module).load_module(module)
def list_modules(self):
"""List all modules that this class knows about."""
for finder, name, _ in pkgutil.walk_packages(self._search_paths):
try:
module = self._find_module(finder, name)
except ImportError:
LOG.debug('Could not import the module %r using the '
'search path %r', name, finder.path)
continue
yield module
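The module discovery done by ``PkgutilModuleIterator`` can be approximated with the standard library alone. A minimal sketch using :func:`pkgutil.iter_modules` to list the submodules of a package without importing them (the ``json`` package is used purely for illustration):

```python
import pkgutil

import json

# List the submodules found under the package's search path,
# similar in spirit to PkgutilModuleIterator.list_modules, but
# without loading each module.
submodules = [name for _, name, _ in pkgutil.iter_modules(json.__path__)]
print(submodules)  # e.g. ['decoder', 'encoder', 'scanner', 'tool']
```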

View File

@ -1,37 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import copy
class DictRegistry(object):
"""A simple registry for a mapping of objects."""
def __init__(self):
self.reset()
def reset(self):
self._items = {}
def register_item(self, key, item):
"""Add item to the registry."""
if key in self._items:
raise ValueError(
'Item already registered with key {0}'.format(key))
self._items[key] = item
def unregister_item(self, key, force=True):
"""Remove item from the registry."""
if key in self._items:
del self._items[key]
elif not force:
raise KeyError("%s: key not present to unregister" % key)
@property
def registered_items(self):
"""All the items that have been registered.
This cannot be used to modify the contents of the registry.
"""
return copy.copy(self._items)
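The registry semantics above are small enough to demonstrate directly: duplicate keys are rejected, ``force=True`` makes unregistration tolerant of missing keys, and ``registered_items`` hands back a shallow copy. A self-contained sketch that inlines a trimmed copy of the class:

```python
import copy

class DictRegistry:
    """Trimmed copy of the registry above, for illustration only."""
    def __init__(self):
        self._items = {}

    def register_item(self, key, item):
        if key in self._items:
            raise ValueError('Item already registered with key {0}'.format(key))
        self._items[key] = item

    def unregister_item(self, key, force=True):
        if key in self._items:
            del self._items[key]
        elif not force:
            raise KeyError("%s: key not present to unregister" % key)

    @property
    def registered_items(self):
        # Shallow copy: callers cannot mutate the registry through it.
        return copy.copy(self._items)

registry = DictRegistry()
registry.register_item('log', object)
try:
    registry.register_item('log', object)  # duplicate key is rejected
except ValueError:
    print('duplicate rejected')
registry.unregister_item('missing')        # force=True: silently ignored
```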

View File

@ -1,238 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
"""
cloud-init reporting framework
The reporting framework is intended to allow all parts of cloud-init to
report events in a structured manner.
"""
from cloudinit.registry import DictRegistry
from cloudinit.reporting.handlers import available_handlers
FINISH_EVENT_TYPE = 'finish'
START_EVENT_TYPE = 'start'
DEFAULT_CONFIG = {
'logging': {'type': 'log'},
}
class _nameset(set):
def __getattr__(self, name):
if name in self:
return name
raise AttributeError("%s not a valid value" % name)
status = _nameset(("SUCCESS", "WARN", "FAIL"))
class ReportingEvent(object):
"""Encapsulation of event formatting."""
def __init__(self, event_type, name, description):
self.event_type = event_type
self.name = name
self.description = description
def as_string(self):
"""The event represented as a string."""
return '{0}: {1}: {2}'.format(
self.event_type, self.name, self.description)
def as_dict(self):
"""The event represented as a dictionary."""
return {'name': self.name, 'description': self.description,
'event_type': self.event_type}
class FinishReportingEvent(ReportingEvent):
def __init__(self, name, description, result=status.SUCCESS):
super(FinishReportingEvent, self).__init__(
FINISH_EVENT_TYPE, name, description)
self.result = result
if result not in status:
raise ValueError("Invalid result: %s" % result)
def as_string(self):
return '{0}: {1}: {2}: {3}'.format(
self.event_type, self.name, self.result, self.description)
def as_dict(self):
"""The event represented as json friendly."""
data = super(FinishReportingEvent, self).as_dict()
data['result'] = self.result
return data
def update_configuration(config):
"""Update the instanciated_handler_registry.
:param config:
The dictionary containing changes to apply. If a key is given
with a False-ish value, the registered handler matching that name
will be unregistered.
"""
for handler_name, handler_config in config.items():
if not handler_config:
instantiated_handler_registry.unregister_item(
handler_name, force=True)
continue
handler_config = handler_config.copy()
cls = available_handlers.registered_items[handler_config.pop('type')]
instance = cls(**handler_config)
instantiated_handler_registry.register_item(handler_name, instance)
def report_event(event):
"""Report an event to all registered event handlers.
This should generally be called via one of the other functions in
the reporting module.
:param event:
The :class:`ReportingEvent` instance to publish to all
registered handlers.
"""
for _, handler in instantiated_handler_registry.registered_items.items():
handler.publish_event(event)
def report_finish_event(event_name, event_description,
result=status.SUCCESS):
"""Report a "finish" event.
See :py:func:`.report_event` for parameter details.
"""
event = FinishReportingEvent(event_name, event_description, result)
return report_event(event)
def report_start_event(event_name, event_description):
"""Report a "start" event.
:param event_name:
The name of the event; this should be a topic which events would
share (e.g. it will be the same for start and finish events).
:param event_description:
A human-readable description of the event that has occurred.
"""
event = ReportingEvent(START_EVENT_TYPE, event_name, event_description)
return report_event(event)
class ReportEventStack(object):
"""Context Manager for using :py:func:`report_event`
This enables calling :py:func:`report_start_event` and
:py:func:`report_finish_event` through a context manager.
:param name:
the name of the event
:param description:
the event's description, passed on to :py:func:`report_start_event`
:param message:
the description to use for the finish event; defaults to the
value of ``description``.
:param parent:
:type parent: :py:class:`ReportEventStack` or None
The parent of this event. The parent is populated with
results of all its children. The name used in reporting
is <parent.name>/<name>
:param reporting_enabled:
Indicates if reporting events should be generated.
If not provided, defaults to the parent's value, or True if no parent
is provided.
:param result_on_exception:
The result value to set if an exception is caught. default
value is FAIL.
"""
def __init__(self, name, description, message=None, parent=None,
reporting_enabled=None, result_on_exception=status.FAIL):
self.parent = parent
self.name = name
self.description = description
self.message = message
self.result_on_exception = result_on_exception
self.result = status.SUCCESS
# use the parent's reporting_enabled value if not provided
if reporting_enabled is None:
if parent:
reporting_enabled = parent.reporting_enabled
else:
reporting_enabled = True
self.reporting_enabled = reporting_enabled
if parent:
self.fullname = '/'.join((parent.fullname, name,))
else:
self.fullname = self.name
self.children = {}
def __repr__(self):
return ("ReportEventStack(%s, %s, reporting_enabled=%s)" %
(self.name, self.description, self.reporting_enabled))
def __enter__(self):
self.result = status.SUCCESS
if self.reporting_enabled:
report_start_event(self.fullname, self.description)
if self.parent:
self.parent.children[self.name] = (None, None)
return self
def _childrens_finish_info(self):
for cand_result in (status.FAIL, status.WARN):
for name, (value, msg) in self.children.items():
if value == cand_result:
return (value, self.message)
return (self.result, self.message)
@property
def result(self):
return self._result
@result.setter
def result(self, value):
if value not in status:
raise ValueError("'%s' not a valid result" % value)
self._result = value
@property
def message(self):
if self._message is not None:
return self._message
return self.description
@message.setter
def message(self, value):
self._message = value
def _finish_info(self, exc):
# return tuple of description, and value
if exc:
return (self.result_on_exception, self.message)
return self._childrens_finish_info()
def __exit__(self, exc_type, exc_value, traceback):
(result, msg) = self._finish_info(exc_value)
if self.parent:
self.parent.children[self.name] = (result, msg)
if self.reporting_enabled:
report_finish_event(self.fullname, msg, result)
instantiated_handler_registry = DictRegistry()
update_configuration(DEFAULT_CONFIG)
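The event formatting contract in this module is easy to check in isolation. A sketch that inlines trimmed copies of the two event classes to show the ``as_string`` output for a start event and a successful finish event (the ``azure-ds`` name is just an example):

```python
class ReportingEvent:
    """Trimmed copy of the class above, for illustration only."""
    def __init__(self, event_type, name, description):
        self.event_type = event_type
        self.name = name
        self.description = description

    def as_string(self):
        return '{0}: {1}: {2}'.format(
            self.event_type, self.name, self.description)

class FinishReportingEvent(ReportingEvent):
    """Finish events additionally carry a result (SUCCESS/WARN/FAIL)."""
    def __init__(self, name, description, result='SUCCESS'):
        super().__init__('finish', name, description)
        self.result = result

    def as_string(self):
        return '{0}: {1}: {2}: {3}'.format(
            self.event_type, self.name, self.result, self.description)

start = ReportingEvent('start', 'azure-ds', 'searching Azure metadata')
done = FinishReportingEvent('azure-ds', 'found Azure metadata')
print(start.as_string())  # start: azure-ds: searching Azure metadata
print(done.as_string())   # finish: azure-ds: SUCCESS: found Azure metadata
```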

View File

@ -1,33 +0,0 @@
import abc
import logging
import six
from cloudinit.registry import DictRegistry
@six.add_metaclass(abc.ABCMeta)
class ReportingHandler(object):
"""Base class for report handlers.
Implement :meth:`~publish_event` for controlling what
the handler does with an event.
"""
@abc.abstractmethod
def publish_event(self, event):
"""Publish an event to the ``INFO`` log level."""
class LogHandler(ReportingHandler):
"""Publishes events to the cloud-init log at the ``INFO`` log level."""
def publish_event(self, event):
"""Publish an event to the ``INFO`` log level."""
logger = logging.getLogger(
'.'.join(['cloudinit', 'reporting', event.event_type, event.name]))
logger.info(event.as_string())
available_handlers = DictRegistry()
available_handlers.register_item('log', LogHandler)

View File

@ -1,40 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import yaml as _yaml
from cloudinit import util
YAMLError = _yaml.YAMLError
def load(path):
"""Load yaml string from a path and return the data represented.
An exception will be raised if types other than the following are found:
dict, int, float, string, list, unicode
"""
return loads(util.load_file(path))
def loads(blob):
"""Load yaml string and return the data represented.
An exception will be raised if types other than the following are found:
dict, int, float, string, list, unicode
"""
return _yaml.safe_load(blob)
def dumps(obj):
"""Dumps an object back into a yaml string."""
formatted = _yaml.safe_dump(obj,
line_break="\n",
indent=4,
explicit_start=True,
explicit_end=True,
default_flow_style=False)
return formatted
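A round trip through these helpers comes down to ``yaml.safe_load`` and ``yaml.safe_dump``. A sketch using PyYAML directly (this bypasses the ``util.load_file`` path handling that ``load()`` adds; the sample data is illustrative):

```python
import yaml

data = {'hostname': 'node-1', 'packages': ['vim', 'curl']}

# Mirror the dumps() options above: explicit document markers and
# block style rather than inline flow style.
blob = yaml.safe_dump(data, indent=4, explicit_start=True,
                      explicit_end=True, default_flow_style=False)
print(blob)  # a block-style YAML document bounded by '---' and '...'

round_tripped = yaml.safe_load(blob)  # loads() is just safe_load
```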

View File

@ -1,125 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import argparse
import sys
from cloudinit import logging
from cloudinit.version import version_string
def populate_parser(parser, common, subcommands):
"""Populate an ArgumentParser with data rather than code
This replaces boilerplate code with boilerplate data when populating a
:py:class:`argparse.ArgumentParser`
:param parser:
the :py:mod:`argparse.ArgumentParser` to populate.
:param common:
a :py:func:`list` of tuples. Each tuple is args and kwargs that are
passed onto :py:func:`argparse.ArgumentParser.add_argument`
:param subcommands:
a :py:func:`dict` of subcommands to add.
The key is added as the subcommand name.
'func' is called to implement the subcommand.
'help' is set up as the subcommands help message
entries in 'opts' are passed onto
:py:func:`argparse.ArgumentParser.add_argument`
"""
for (args, kwargs) in common:
parser.add_argument(*args, **kwargs)
subparsers = parser.add_subparsers()
for subcmd in sorted(subcommands):
val = subcommands[subcmd]
sparser = subparsers.add_parser(subcmd, help=val['help'])
sparser.set_defaults(func=val['func'], name=subcmd)
for (args, kwargs) in val.get('opts', {}):
sparser.add_argument(*args, **kwargs)
def main(args=sys.argv):
parser = argparse.ArgumentParser(prog='cloud-init')
populate_parser(parser, COMMON_ARGS, SUBCOMMANDS)
parsed = parser.parse_args(args[1:])
if not hasattr(parsed, 'func'):
parser.error('too few arguments')
logging.configure_logging(log_to_console=parsed.log_to_console)
parsed.func(parsed)
return 0
def main_version(args):
sys.stdout.write("cloud-init {0}\n".format(version_string()))
def unimplemented_subcommand(args):
raise NotImplementedError(
"sub command '{0}' is not implemented".format(args.name))
COMMON_ARGS = [
(('--log-to-console',), {'action': 'store_true', 'default': False}),
(('--verbose', '-v'), {'action': 'count', 'default': 0}),
]
SUBCOMMANDS = {
# The stages a normal boot takes
'network': {
'func': unimplemented_subcommand,
'help': 'locate and apply networking configuration',
},
'search': {
'func': unimplemented_subcommand,
'help': 'search available data sources',
},
'config': {
'func': unimplemented_subcommand,
'help': 'run available config modules',
},
'config-final': {
'func': unimplemented_subcommand,
'help': 'run "final" config modules',
},
# utility
'version': {
'func': main_version,
'help': 'print cloud-init version',
},
'all': {
'func': unimplemented_subcommand,
'help': 'run all stages as if from boot',
'opts': [
(('--clean',),
{'help': 'clear any prior system state',
'action': 'store_true', 'default': False})],
},
'clean': {
'func': unimplemented_subcommand,
'help': 'clear any prior system state.',
'opts': [
(('-F', '--full'),
{'help': 'be more complete (remove logs).',
'default': False, 'action': 'store_true'}),
],
},
'query': {
'func': unimplemented_subcommand,
'help': 'query system state',
'opts': [
(('--json',),
{'help': 'output in json format',
'action': 'store_true', 'default': False})]
},
}
if __name__ == '__main__':
sys.exit(main())
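The data-driven parser population above keeps subcommand wiring declarative. A trimmed, self-contained sketch of the same pattern (the ``show_version`` handler and version string are illustrative, not part of the original CLI):

```python
import argparse

def populate_parser(parser, common, subcommands):
    # Same shape as the helper above: common args/kwargs tuples, plus a
    # subcommand dict carrying 'func', 'help' and optional 'opts'.
    for (args, kwargs) in common:
        parser.add_argument(*args, **kwargs)
    subparsers = parser.add_subparsers()
    for subcmd in sorted(subcommands):
        val = subcommands[subcmd]
        sparser = subparsers.add_parser(subcmd, help=val['help'])
        sparser.set_defaults(func=val['func'], name=subcmd)
        for (args, kwargs) in val.get('opts', []):
            sparser.add_argument(*args, **kwargs)

def show_version(parsed):
    print('cloud-init (example sketch)')

parser = argparse.ArgumentParser(prog='cloud-init')
populate_parser(
    parser,
    common=[(('--verbose', '-v'), {'action': 'count', 'default': 0})],
    subcommands={'version': {'func': show_version, 'help': 'print version'}})
parsed = parser.parse_args(['version'])
parsed.func(parsed)  # dispatches to show_version
```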

View File

@ -1,4 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab

View File

@ -1,217 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import abc
import itertools
import six
from cloudinit import exceptions
from cloudinit import logging
from cloudinit import sources
from cloudinit.sources import strategy
LOG = logging.getLogger(__name__)
class APIResponse(object):
"""Holds API response content
To access the content in the binary format, use the
`buffer` attribute, while the unicode content can be
accessed by calling `str` over this (or by accessing
the `decoded_buffer` property).
"""
def __init__(self, buffer, encoding="utf-8"):
self.buffer = buffer
self.encoding = encoding
self._decoded_buffer = None
@property
def decoded_buffer(self):
# Avoid computing this again and again (although multiple threads
# may decode it if they all get in here at the same time, but meh
# that's ok).
if self._decoded_buffer is None:
self._decoded_buffer = self.buffer.decode(self.encoding)
return self._decoded_buffer
def __str__(self):
return self.decoded_buffer
class DataSourceLoader(object):
"""Class for retrieving an available data source instance
:param names:
A list of possible data source names, from which the loader
should pick. This can be used to filter the data sources
that can be found from outside of cloudinit control.
:param module_iterator:
An instance of :class:`cloudinit.plugin_finder.BaseModuleIterator`,
which is used to find possible modules where the data sources
can be found.
:param strategies:
An iterator of search strategy classes, where each strategy is capable
of filtering the data sources that can be used by cloudinit.
Possible strategies include serial data source search,
parallel data source search, or filtering data sources according to
some criteria (e.g. only network data sources).
"""
def __init__(self, names, module_iterator, strategies):
self._names = names
self._module_iterator = module_iterator
self._strategies = strategies
@staticmethod
def _implements_source_api(module):
"""Check if the given module implements the data source API."""
return hasattr(module, 'data_sources')
def _valid_modules(self):
"""Return all the modules that are *valid*
Valid modules are those that implement a particular API
for declaring the data sources they export.
"""
modules = self._module_iterator.list_modules()
return filter(self._implements_source_api, modules)
def all_data_sources(self):
"""Get all the data source classes that this finder knows about."""
return itertools.chain.from_iterable(
module.data_sources()
for module in self._valid_modules())
def valid_data_sources(self):
"""Get the data sources that are valid for this run."""
data_sources = self.all_data_sources()
# Instantiate them before passing to the strategies.
data_sources = (data_source() for data_source in data_sources)
for strategy_instance in self._strategies:
data_sources = strategy_instance.search_data_sources(data_sources)
return data_sources
@six.add_metaclass(abc.ABCMeta)
class BaseDataSource(object):
"""Base class for the data sources."""
datasource_config = {}
def __init__(self, config=None):
self._cache = {}
# TODO(cpopa): merge them instead.
self._config = config or self.datasource_config
def _get_cache_data(self, path):
"""Do a metadata lookup for the given *path*
This will return the available metadata under *path*,
while caching the result, so that a next call will not do
an additional API call.
"""
if path not in self._cache:
self._cache[path] = self._get_data(path)
return self._cache[path]
@abc.abstractmethod
def load(self):
"""Try to load this metadata service.
This should return ``True`` if the service was loaded properly,
``False`` otherwise.
"""
@abc.abstractmethod
def _get_data(self, path):
"""Retrieve the metadata exported under the `path` key.
This should return an instance of :class:`APIResponse`.
"""
@abc.abstractmethod
def version(self):
"""Get the version of the current data source."""
def instance_id(self):
"""Get this instance's id."""
def user_data(self):
"""Get the user data available for this instance."""
def vendor_data(self):
"""Get the vendor data available for this instance."""
def host_name(self):
"""Get the hostname available for this instance."""
def public_keys(self):
"""Get the public keys available for this instance."""
def network_config(self):
"""Get the specified network config, if any."""
def admin_password(self):
"""Get the admin password."""
def post_password(self, password):
"""Post the password to the metadata service."""
def can_update_password(self):
"""Check if this data source can update the admin password."""
def is_password_changed(self):
"""Check if the data source has a new password for this instance."""
return False
def is_password_set(self):
"""Check if the password was already posted to the metadata service."""
def get_data_source(names, module_iterator, strategies=None):
"""Get an instance of any data source available.
:param names:
A list of possible data source names, from which the loader
should pick. This can be used, from outside of cloudinit's control,
to restrict which data sources are considered.
:param module_iterator:
A subclass of :class:`cloudinit.plugin_finder.BaseModuleIterator`,
which is used to find possible modules where the data sources
can be found.
:param strategies:
An iterable of search strategy classes, where each strategy is capable
of filtering the data sources that can be used by cloudinit.
"""
if names:
default_strategies = [strategy.FilterNameStrategy(names)]
else:
default_strategies = []
if strategies is None:
strategies = []
strategy_instances = [strategy_cls() for strategy_cls in strategies]
strategies = default_strategies + strategy_instances
iterator = module_iterator(sources.__path__)
loader = DataSourceLoader(names, iterator, strategies)
valid_sources = loader.valid_data_sources()
data_source = next(valid_sources, None)
if not data_source:
raise exceptions.CloudInitError('No available data source found')
return data_source
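The loader pipeline in `get_data_source` can be exercised standalone: strategies are applied in order to a generator of instantiated sources, and the first survivor wins. `DummySource` and `NameFilter` below are hypothetical stand-ins for a real data source and for `FilterNameStrategy`; this is a sketch of the shape, not the shipped classes.

```python
# Hypothetical stand-ins: a trivial data source and a name-filtering
# strategy, mirroring FilterNameStrategy's interface.
class DummySource(object):
    def load(self):
        return True

class NameFilter(object):
    def __init__(self, names):
        self._names = names

    def search_data_sources(self, data_sources):
        # Lazily keep only the sources whose class name was requested.
        return (source for source in data_sources
                if source.__class__.__name__ in self._names)

# The same pipeline get_data_source builds: instantiate the classes,
# thread the generator through each strategy, then pick the first hit.
data_sources = (cls() for cls in (DummySource,))
for strategy in [NameFilter(['DummySource'])]:
    data_sources = strategy.search_data_sources(data_sources)

data_source = next(data_sources, None)
print(type(data_source).__name__)  # DummySource
```

Because every stage is a generator, no data source is instantiated work beyond what the first match requires.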


@@ -1,116 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
"""Base classes for interacting with OpenStack data sources."""
import abc
import json
import logging
import os
import six
from cloudinit.sources import base
__all__ = ('BaseOpenStackSource', )
_PAYLOAD_KEY = "content_path"
_ADMIN_PASSWORD = "admin_pass"
LOG = logging.getLogger(__name__)
_OS_LATEST = 'latest'
_OS_FOLSOM = '2012-08-10'
_OS_GRIZZLY = '2013-04-04'
_OS_HAVANA = '2013-10-17'
# Keep this in chronological order. New supported versions go at the end.
_OS_VERSIONS = (
_OS_FOLSOM,
_OS_GRIZZLY,
_OS_HAVANA,
)
@six.add_metaclass(abc.ABCMeta)
class BaseOpenStackSource(base.BaseDataSource):
"""Base class for interacting with an OpenStack data source.
This is useful both for the HTTP data source
and for ConfigDrive.
"""
def __init__(self):
super(BaseOpenStackSource, self).__init__()
self._version = None
@abc.abstractmethod
def _available_versions(self):
"""Get the available metadata versions."""
@abc.abstractmethod
def _path_join(self, path, *addons):
"""Join one or more components together."""
def version(self):
"""Get the underlying data source version."""
return self._version
def _working_version(self):
versions = self._available_versions()
# _OS_VERSIONS is stored in chronological order, so
# reverse it to check the newest versions first.
supported = reversed(_OS_VERSIONS)
selected_version = next((version for version in supported
if version in versions), _OS_LATEST)
LOG.debug("Selected version %r from %s", selected_version, versions)
return selected_version
def _get_content(self, name):
path = self._path_join('openstack', 'content', name)
return self._get_cache_data(path)
def _get_meta_data(self):
path = self._path_join('openstack', self._version, 'meta_data.json')
data = self._get_cache_data(path)
if data:
return json.loads(str(data))
def load(self):
self._version = self._working_version()
super(BaseOpenStackSource, self).load()
def user_data(self):
path = self._path_join('openstack', self._version, 'user_data')
return self._get_cache_data(path).buffer
def vendor_data(self):
path = self._path_join('openstack', self._version, 'vendor_data.json')
return self._get_cache_data(path).buffer
def instance_id(self):
return self._get_meta_data().get('uuid')
def host_name(self):
return self._get_meta_data().get('hostname')
def public_keys(self):
public_keys = self._get_meta_data().get('public_keys')
if public_keys:
return list(public_keys.values())
return []
def network_config(self):
network_config = self._get_meta_data().get('network_config')
if not network_config:
return None
if _PAYLOAD_KEY not in network_config:
return None
content_path = network_config[_PAYLOAD_KEY]
content_name = os.path.basename(content_path)
return str(self._get_content(content_name))
def admin_password(self):
meta_data = self._get_meta_data()
meta = meta_data.get('meta', {})
return meta.get(_ADMIN_PASSWORD) or meta_data.get(_ADMIN_PASSWORD)
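The version negotiation done by `_working_version` can be sketched in isolation: the service's advertised versions are checked against the supported list, newest first, falling back to `latest` when nothing matches. The standalone `working_version` function below is a hedged copy of that logic, not the class method itself.

```python
_OS_LATEST = 'latest'
# Chronological order, as in the module above.
_OS_VERSIONS = ('2012-08-10', '2013-04-04', '2013-10-17')

def working_version(available_versions):
    # Check newest supported versions first; fall back to 'latest'.
    supported = reversed(_OS_VERSIONS)
    return next((version for version in supported
                 if version in available_versions), _OS_LATEST)

print(working_version(['2012-08-10', '2013-04-04']))  # 2013-04-04
print(working_version([]))                            # latest
```

The fallback to `latest` matters when a service advertises only versions newer than anything on the supported list.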


@@ -1,132 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import logging
import os
import posixpath
import re
from cloudinit import exceptions
from cloudinit.osys import base
from cloudinit.sources import base as base_source
from cloudinit.sources.openstack import base as baseopenstack
from cloudinit import url_helper
LOG = logging.getLogger(__name__)
IS_WINDOWS = os.name == 'nt'
# Not necessarily the same as using datetime.strftime,
# but should be enough for our use case.
VERSION_REGEX = re.compile(r'^\d{4}-\d{2}-\d{2}$')
class HttpOpenStackSource(baseopenstack.BaseOpenStackSource):
"""Class for exporting the HTTP OpenStack data source."""
datasource_config = {
'max_wait': 120,
'timeout': 10,
'metadata_url': 'http://169.254.169.254/',
'post_password_version': '2013-04-04',
'retries': 3,
}
@staticmethod
def _enable_metadata_access(metadata_url):
if IS_WINDOWS:
osutils = base.get_osutils()
osutils.network.set_metadata_ip_route(metadata_url)
@staticmethod
def _path_join(path, *addons):
return posixpath.join(path, *addons)
@staticmethod
def _valid_api_version(version):
if version == 'latest':
return version
return VERSION_REGEX.match(version)
def _available_versions(self):
content = str(self._get_cache_data("openstack"))
versions = list(filter(None, content.splitlines()))
if not versions:
msg = 'No metadata versions were found.'
raise exceptions.CloudInitError(msg)
for version in versions:
if not self._valid_api_version(version):
msg = 'Invalid API version %r' % (version,)
raise exceptions.CloudInitError(msg)
return versions
def _get_data(self, path):
norm_path = self._path_join(self._config['metadata_url'], path)
LOG.debug('Getting metadata from: %s', norm_path)
response = url_helper.wait_any_url([norm_path],
timeout=self._config['timeout'],
max_wait=self._config['max_wait'])
if response:
_, request = response
return base_source.APIResponse(request.contents,
encoding=request.encoding)
msg = "Metadata for url {0} was not accessible within the allotted time"
raise exceptions.CloudInitError(msg.format(norm_path))
def _post_data(self, path, data):
norm_path = self._path_join(self._config['metadata_url'], path)
LOG.debug('Posting metadata to: %s', norm_path)
url_helper.read_url(norm_path, data=data,
retries=self._config['retries'],
timeout=self._config['timeout'])
@property
def _password_path(self):
return 'openstack/%s/password' % self._version
def load(self):
metadata_url = self._config['metadata_url']
self._enable_metadata_access(metadata_url)
super(HttpOpenStackSource, self).load()
try:
self._get_meta_data()
return True
except Exception:
LOG.warning('Metadata not found at URL %r', metadata_url)
return False
def can_update_password(self):
"""Check if the password can be posted for the current data source."""
password = map(int, self._config['post_password_version'].split("-"))
if self._version == 'latest':
current = (0, )
else:
current = map(int, self._version.split("-"))
return tuple(current) >= tuple(password)
@property
def is_password_set(self):
path = self._password_path
content = self._get_cache_data(path).buffer
return len(content) > 0
def post_password(self, password):
try:
self._post_data(self._password_path, password)
return True
except url_helper.UrlError as ex:
if ex.status_code == url_helper.CONFLICT:
# Password already set
return False
else:
raise
def data_sources():
"""Get the data sources exported in this module."""
return (HttpOpenStackSource,)


@@ -1,98 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import abc
import six
from cloudinit import logging
LOG = logging.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
class BaseSearchStrategy(object):
"""Declare search strategies for data sources
A *search strategy* represents a decoupled way of choosing
one or more data sources from a list of data sources.
Each strategy can be used interchangeably and they can
be composed. For instance, one can apply a filtering strategy
over a parallel search strategy, which looks for the available
data sources.
"""
@abc.abstractmethod
def search_data_sources(self, data_sources):
"""Search the possible data sources for this strategy
The method should filter the data sources that can be
considered *valid* for the given strategy.
:param data_sources:
An iterator of data source instances, where the lookup
will be done.
"""
@staticmethod
def is_datasource_available(data_source):
"""Check if the given *data_source* is considered *available*
A data source is considered available if it can be loaded,
although other strategies may implement their own notion of availability.
"""
try:
if data_source.load():
return True
except Exception:
LOG.error("Failed to load data source %r", data_source)
return False
class FilterNameStrategy(BaseSearchStrategy):
"""A strategy for filtering data sources by name
:param names:
A list of strings, where each string is a name for a possible
data source. Only the data sources that are in this list will
be loaded and filtered.
"""
def __init__(self, names=None):
self._names = names
super(FilterNameStrategy, self).__init__()
def search_data_sources(self, data_sources):
return (source for source in data_sources
if source.__class__.__name__ in self._names)
class SerialSearchStrategy(BaseSearchStrategy):
"""A strategy that chooses a data source in serial."""
def search_data_sources(self, data_sources):
for data_source in data_sources:
if self.is_datasource_available(data_source):
yield data_source
class FilterVersionStrategy(BaseSearchStrategy):
"""A strategy for filtering data sources by their version
:param versions:
A list of strings, where each string is a possible
version that a data source can have.
"""
def __init__(self, versions=None):
if versions is None:
versions = []
self._versions = versions
super(FilterVersionStrategy, self).__init__()
def search_data_sources(self, data_sources):
return (source for source in data_sources
if source.version() in self._versions)
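The combination of `SerialSearchStrategy` and `is_datasource_available` amounts to lazily probing sources one at a time and yielding only those whose `load()` succeeds. The sketch below reproduces that behaviour standalone; `Good` and `Broken` are hypothetical stand-ins for real data sources.

```python
class Good(object):
    def load(self):
        return True

class Broken(object):
    def load(self):
        raise RuntimeError("metadata service unreachable")

def serial_search(data_sources):
    # Probe each source in turn, as SerialSearchStrategy does.
    for data_source in data_sources:
        try:
            if data_source.load():
                yield data_source
        except Exception:
            # Mirrors is_datasource_available: a failing load() simply
            # marks the source as unavailable.
            continue

found = [type(s).__name__ for s in serial_search([Broken(), Good()])]
print(found)  # ['Good']
```

Because the search is a generator, a caller taking only the first result never probes the sources after it.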


@@ -1,107 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import collections
import os
import re
try:
import jinja2
from jinja2 import Template as JTemplate
JINJA_AVAILABLE = True
except (ImportError, AttributeError):
JINJA_AVAILABLE = False # noqa
from cloudinit import logging
LOG = logging.getLogger(__name__)
TYPE_MATCHER = re.compile(r"##\s*template:(.*)", re.I)
BASIC_MATCHER = re.compile(r'\$\{([A-Za-z0-9_.]+)\}|\$([A-Za-z0-9_.]+)')
def basic_render(content, params):
"""This does simple replacement of bash-style variable templates.
It identifies patterns like ${a} or $a and can also identify patterns like
${a.b} or $a.b, which will look up the key 'b' in the dictionary
rooted at key 'a'.
"""
def replacer(match):
# Only 1 of the 2 groups will actually have a valid entry.
name = match.group(1) or match.group(2)
if name is None:
# not sure how this can possibly occur
raise RuntimeError("Match encountered but no valid group present")
path = collections.deque(name.split("."))
selected_params = params
while len(path) > 1:
key = path.popleft()
if not isinstance(selected_params, dict):
raise TypeError(
"Can not traverse into non-dictionary '%s' of type %s "
"while looking for subkey '%s'" %
(selected_params, type(selected_params), key))
selected_params = selected_params[key]
key = path.popleft()
if not isinstance(selected_params, dict):
raise TypeError("Can not extract key '%s' from non-dictionary"
" '%s' of type %s"
% (key, selected_params, type(selected_params)))
return str(selected_params[key])
return BASIC_MATCHER.sub(replacer, content)
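For reference, the substitution above can be exercised standalone. This is a compressed copy of `basic_render` with its regex, omitting the type-checking error paths; it adds no new behaviour.

```python
import collections
import re

# Same pattern as BASIC_MATCHER above: matches ${a} / $a and the
# dotted forms ${a.b} / $a.b.
BASIC_MATCHER = re.compile(r'\$\{([A-Za-z0-9_.]+)\}|\$([A-Za-z0-9_.]+)')

def basic_render(content, params):
    def replacer(match):
        # Only one of the two groups matches for any given pattern.
        name = match.group(1) or match.group(2)
        path = collections.deque(name.split("."))
        selected_params = params
        while len(path) > 1:
            # Walk nested dictionaries for dotted names like a.b.
            selected_params = selected_params[path.popleft()]
        return str(selected_params[path.popleft()])
    return BASIC_MATCHER.sub(replacer, content)

rendered = basic_render("host=${net.hostname} id=$uuid",
                        {"net": {"hostname": "box"}, "uuid": "42"})
print(rendered)  # host=box id=42
```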
def detect_template(text):
def jinja_render(content, params):
# keep_trailing_newline is in jinja2 2.7+, not 2.6
add = "\n" if content.endswith("\n") else ""
return JTemplate(content,
undefined=jinja2.StrictUndefined,
trim_blocks=True).render(**params) + add
if "\n" in text:
ident, rest = text.split("\n", 1)
else:
ident = text
rest = ''
type_match = TYPE_MATCHER.match(ident)
if not type_match:
return ('basic', basic_render, text)
else:
template_type = type_match.group(1).lower().strip()
if template_type not in ('jinja', 'basic'):
raise ValueError("Unknown template rendering type '%s' requested"
% template_type)
if template_type == 'jinja' and not JINJA_AVAILABLE:
raise ValueError("Template requested jinja as renderer, but Jinja "
"is not available.")
elif template_type == 'jinja' and JINJA_AVAILABLE:
return ('jinja', jinja_render, rest)
# Only thing left over is the basic renderer (it is always available).
return ('basic', basic_render, rest)
def render_from_file(fn, params, encoding='utf-8'):
with open(fn, 'rb') as fh:
content = fh.read()
content = content.decode(encoding)
_, renderer, content = detect_template(content)
return renderer(content, params)
def render_to_file(fn, outfn, params, mode=0o644, encoding='utf-8'):
contents = render_from_file(fn, params, encoding=encoding)
with open(outfn, 'wb') as fh:
fh.write(contents.encode(encoding))
os.chmod(outfn, mode)
def render_string(content, params):
_, renderer, content = detect_template(content)
return renderer(content, params)
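The `## template:<type>` header handling inside `detect_template` can be sketched in isolation, without the jinja2 dependency. `sniff_renderer` is a hypothetical helper returning just the renderer name and the remaining text; the real function also resolves the renderer callable and validates availability.

```python
import re

# Same pattern as TYPE_MATCHER above: an optional first line of the
# form "## template:<type>" selects the renderer.
TYPE_MATCHER = re.compile(r"##\s*template:(.*)", re.I)

def sniff_renderer(text):
    ident, _, rest = text.partition("\n")
    match = TYPE_MATCHER.match(ident)
    if not match:
        # No header: the whole text goes to the basic renderer.
        return 'basic', text
    # Header found: strip it and return the declared renderer type.
    return match.group(1).lower().strip(), rest

print(sniff_renderer("## template: jinja\nHello {{ name }}"))
print(sniff_renderer("plain $text"))
```

Note that when the header is absent the original text is rendered unchanged, so plain `${...}` templates need no header at all.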


@@ -1,21 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import httpretty
import testtools
class TestCase(testtools.TestCase):
"""Base class for all cloud-init test cases."""
def setUp(self):
super(TestCase, self).setUp()
# Do not allow any unknown network connections to get triggered...
httpretty.HTTPretty.allow_net_connect = False
def tearDown(self):
super(TestCase, self).tearDown()
# Ok allow it again....
httpretty.HTTPretty.allow_net_connect = True


@@ -1,52 +0,0 @@
# Copyright (C) 2015 Canonical Ltd.
# Copyright 2015 Cloudbase Solutions Srl
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cloudinit.osys import base
from cloudinit.tests import TestCase
from cloudinit.tests.util import mock
class TestOSUtils(TestCase):
@mock.patch('importlib.import_module')
@mock.patch('platform.linux_distribution')
@mock.patch('platform.system')
def _test_getosutils(self, mock_system,
mock_linux_distribution, mock_import_module,
linux=False):
if linux:
os_name = 'Linux'
mock_linux_distribution.return_value = (os_name, None, None)
else:
os_name = 'Windows'
mock_system.return_value = os_name
mock_linux_distribution.return_value = (None, None, None)
module = base.get_osutils()
mock_import_module.assert_called_once_with(
"cloudinit.osys.{0}.base".format(os_name.lower()))
self.assertEqual(mock_import_module.return_value.OSUtils,
module)
if linux:
mock_linux_distribution.assert_called_once_with()
self.assertFalse(mock_system.called)
else:
mock_linux_distribution.assert_called_once_with()
mock_system.assert_called_once_with()
def test_getosutils(self):
self._test_getosutils(linux=True)
self._test_getosutils(linux=False)


@@ -1,72 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import importlib
from cloudinit import exceptions
from cloudinit.tests import TestCase
from cloudinit.tests.util import mock
class TestWindowsGeneral(TestCase):
def setUp(self):
super(TestWindowsGeneral, self).setUp()
self._ctypes_mock = mock.Mock()
self._util_mock = mock.MagicMock()
self._module_patcher = mock.patch.dict(
'sys.modules',
{'ctypes': self._ctypes_mock,
'cloudinit.osys.windows.util': self._util_mock})
self._module_patcher.start()
self._general_module = importlib.import_module(
"cloudinit.osys.windows.general")
self._kernel32 = self._general_module.kernel32
self._general = self._general_module.General()
def tearDown(self):
super(TestWindowsGeneral, self).tearDown()
self._module_patcher.stop()
def _test_check_os_version(self, ret_value, error_value=None):
verset_return = 2
self._kernel32.VerSetConditionMask.return_value = (
verset_return)
self._kernel32.VerifyVersionInfoW.return_value = ret_value
self._kernel32.GetLastError.return_value = error_value
old_version = self._kernel32.ERROR_OLD_WIN_VERSION
if error_value and error_value is not old_version:
self.assertRaises(exceptions.CloudInitError,
self._general.check_os_version, 3, 1, 2)
self._kernel32.GetLastError.assert_called_once_with()
else:
response = self._general.check_os_version(3, 1, 2)
self._ctypes_mock.sizeof.assert_called_once_with(
self._kernel32.Win32_OSVERSIONINFOEX_W)
self.assertEqual(
3, self._kernel32.VerSetConditionMask.call_count)
mask = (self._kernel32.VER_MAJORVERSION |
self._kernel32.VER_MINORVERSION |
self._kernel32.VER_BUILDNUMBER)
self._kernel32.VerifyVersionInfoW.assert_called_with(
self._ctypes_mock.byref.return_value, mask, verset_return)
if error_value is old_version:
self._kernel32.GetLastError.assert_called_with()
self.assertFalse(response)
else:
self.assertTrue(response)
def test_check_os_version(self):
m = mock.MagicMock()
self._test_check_os_version(ret_value=m)
def test_check_os_version_expect_false(self):
self._test_check_os_version(
ret_value=None, error_value=self._kernel32.ERROR_OLD_WIN_VERSION)


@@ -1,361 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import importlib
import subprocess
from cloudinit import exceptions
from cloudinit import tests
from cloudinit.tests.util import LogSnatcher
from cloudinit.tests.util import mock
class TestNetworkWindows(tests.TestCase):
def setUp(self):
super(TestNetworkWindows, self).setUp()
self._ctypes_mock = mock.MagicMock()
self._winreg_mock = mock.Mock()
self._win32com_mock = mock.Mock()
self._wmi_mock = mock.Mock()
self._module_patcher = mock.patch.dict(
'sys.modules',
{'ctypes': self._ctypes_mock,
'win32com': self._win32com_mock,
'wmi': self._wmi_mock,
'six.moves.winreg': self._winreg_mock})
self._module_patcher.start()
self._iphlpapi = mock.Mock()
self._kernel32 = mock.Mock()
self._ws2_32 = mock.Mock()
self._network_module = importlib.import_module(
'cloudinit.osys.windows.network')
self._network_module.iphlpapi = self._iphlpapi
self._network_module.kernel32 = self._kernel32
self._network_module.ws2_32 = self._ws2_32
self._network = self._network_module.Network()
def tearDown(self):
super(TestNetworkWindows, self).tearDown()
self._module_patcher.stop()
def _test__heap_alloc(self, fail):
mock_heap = mock.Mock()
mock_size = mock.Mock()
if fail:
self._kernel32.HeapAlloc.return_value = None
e = self.assertRaises(exceptions.CloudInitError,
self._network_module._heap_alloc,
mock_heap, mock_size)
self.assertEqual('Unable to allocate memory for the IP '
'forward table', str(e))
else:
result = self._network_module._heap_alloc(mock_heap, mock_size)
self.assertEqual(self._kernel32.HeapAlloc.return_value, result)
self._kernel32.HeapAlloc.assert_called_once_with(
mock_heap, 0, self._ctypes_mock.c_size_t(mock_size.value))
def test__heap_alloc_error(self):
self._test__heap_alloc(fail=True)
def test__heap_alloc_no_error(self):
self._test__heap_alloc(fail=False)
def _check_raises_forward(self):
with self._network._get_forward_table():
pass
def test__get_forward_table_no_memory(self):
self._network_module._heap_alloc = mock.Mock()
error_msg = 'Unable to allocate memory for the IP forward table'
exc = exceptions.CloudInitError(error_msg)
self._network_module._heap_alloc.side_effect = exc
e = self.assertRaises(exceptions.CloudInitError,
self._check_raises_forward)
self.assertEqual(error_msg, str(e))
self._network_module._heap_alloc.assert_called_once_with(
self._kernel32.GetProcessHeap.return_value,
self._ctypes_mock.wintypes.ULONG.return_value)
def test__get_forward_table_insufficient_buffer_no_memory(self):
self._kernel32.HeapAlloc.side_effect = (mock.sentinel.table_mem, None)
self._iphlpapi.GetIpForwardTable.return_value = (
self._iphlpapi.ERROR_INSUFFICIENT_BUFFER)
self.assertRaises(exceptions.CloudInitError,
self._check_raises_forward)
table = self._ctypes_mock.cast.return_value
self._iphlpapi.GetIpForwardTable.assert_called_once_with(
table,
self._ctypes_mock.byref.return_value, 0)
heap_calls = [
mock.call(self._kernel32.GetProcessHeap.return_value, 0, table),
mock.call(self._kernel32.GetProcessHeap.return_value, 0, table)
]
self.assertEqual(heap_calls, self._kernel32.HeapFree.mock_calls)
def _test__get_forward_table(self, reallocation=False,
insufficient_buffer=False,
fail=False):
if fail:
e = self.assertRaises(exceptions.CloudInitError,
self._check_raises_forward)
msg = ('Unable to get IP forward table. Error: %s'
% mock.sentinel.error)
self.assertEqual(msg, str(e))
else:
with self._network._get_forward_table() as table:
pass
pointer = self._ctypes_mock.POINTER(
self._iphlpapi.Win32_MIB_IPFORWARDTABLE)
expected_forward_table = self._ctypes_mock.cast(
self._kernel32.HeapAlloc.return_value, pointer)
self.assertEqual(expected_forward_table, table)
heap_calls = [
mock.call(self._kernel32.GetProcessHeap.return_value, 0,
self._ctypes_mock.cast.return_value)
]
forward_calls = [
mock.call(self._ctypes_mock.cast.return_value,
self._ctypes_mock.byref.return_value, 0),
]
if insufficient_buffer:
# We expect two calls for GetIpForwardTable
forward_calls.append(forward_calls[0])
if reallocation:
heap_calls.append(heap_calls[0])
self.assertEqual(heap_calls, self._kernel32.HeapFree.mock_calls)
self.assertEqual(forward_calls,
self._iphlpapi.GetIpForwardTable.mock_calls)
def test__get_forward_table_sufficient_buffer(self):
self._iphlpapi.GetIpForwardTable.return_value = None
self._test__get_forward_table()
def test__get_forward_table_insufficient_buffer_reallocate(self):
self._kernel32.HeapAlloc.side_effect = (
mock.sentinel.table_mem, mock.sentinel.table_mem)
self._iphlpapi.GetIpForwardTable.side_effect = (
self._iphlpapi.ERROR_INSUFFICIENT_BUFFER, None)
self._test__get_forward_table(reallocation=True,
insufficient_buffer=True)
def test__get_forward_table_insufficient_buffer_other_error(self):
self._kernel32.HeapAlloc.side_effect = (
mock.sentinel.table_mem, mock.sentinel.table_mem)
self._iphlpapi.GetIpForwardTable.side_effect = (
self._iphlpapi.ERROR_INSUFFICIENT_BUFFER, mock.sentinel.error)
self._test__get_forward_table(reallocation=True,
insufficient_buffer=True,
fail=True)
@mock.patch('cloudinit.osys.windows.network.Network.routes')
def test_default_gateway_no_gateway(self, mock_routes):
mock_routes.return_value = iter((mock.Mock(), mock.Mock()))
self.assertIsNone(self._network.default_gateway())
mock_routes.assert_called_once_with()
@mock.patch('cloudinit.osys.windows.network.Network.routes')
def test_default_gateway(self, mock_routes):
default_gateway = mock.Mock()
default_gateway.destination = '0.0.0.0'
mock_routes.return_value = iter((mock.Mock(), default_gateway))
gateway = self._network.default_gateway()
self.assertEqual(default_gateway, gateway)
def test_route_is_static(self):
bad_route = self._network_module.Route(
destination=None, netmask=None,
gateway=None, interface=None, metric=None,
flags=404)
good_route = self._network_module.Route(
destination=None, netmask=None,
gateway=None, interface=None, metric=None,
flags=self._network_module.MIB_IPPROTO_NETMGMT)
self.assertTrue(good_route.is_static)
self.assertFalse(bad_route.is_static)
@mock.patch('subprocess.Popen')
def _test_route_add(self, mock_popen, err):
mock_route = mock.Mock()
mock_route.destination = mock.sentinel.destination
mock_route.netmask = mock.sentinel.netmask
mock_route.gateway = mock.sentinel.gateway
args = ['ROUTE', 'ADD', mock.sentinel.destination,
'MASK', mock.sentinel.netmask,
mock.sentinel.gateway]
mock_popen.return_value.returncode = err
mock_popen.return_value.communicate.return_value = (None, err)
if err:
e = self.assertRaises(exceptions.CloudInitError,
self._network_module.Route.add,
mock_route)
msg = "Unable to add route: %s" % err
self.assertEqual(msg, str(e))
else:
self._network_module.Route.add(mock_route)
mock_popen.assert_called_once_with(args, shell=False,
stderr=subprocess.PIPE)
def test_route_add_fails(self):
self._test_route_add(err=1)
def test_route_add_works(self):
self._test_route_add(err=0)
@mock.patch('cloudinit.osys.windows.network.Network._get_forward_table')
def test_routes(self, mock_forward_table):
def _same(arg):
return arg._mock_name.encode()
route = mock.MagicMock()
mock_cast_result = mock.Mock()
mock_cast_result.contents = [route]
self._ctypes_mock.cast.return_value = mock_cast_result
self._network_module.ws2_32.Ws2_32.inet_ntoa.side_effect = _same
route.dwForwardIfIndex = 'dwForwardIfIndex'
route.dwForwardProto = 'dwForwardProto'
route.dwForwardMetric1 = 'dwForwardMetric1'
routes = self._network.routes()
mock_forward_table.assert_called_once_with()
enter = mock_forward_table.return_value.__enter__
enter.assert_called_once_with()
exit_ = mock_forward_table.return_value.__exit__
exit_.assert_called_once_with(None, None, None)
self.assertEqual(1, len(routes))
given_route = routes[0]
self.assertEqual('dwForwardDest', given_route.destination)
self.assertEqual('dwForwardNextHop', given_route.gateway)
self.assertEqual('dwForwardMask', given_route.netmask)
self.assertEqual('dwForwardIfIndex', given_route.interface)
self.assertEqual('dwForwardMetric1', given_route.metric)
self.assertEqual('dwForwardProto', given_route.flags)
@mock.patch('cloudinit.osys.base.get_osutils')
@mock.patch('cloudinit.osys.windows.network.Network.routes')
def test_set_metadata_ip_route_not_called(self, mock_routes,
mock_osutils):
general = mock_osutils.return_value.general
general.check_os_version.return_value = False
self._network.set_metadata_ip_route(mock.sentinel.url)
self.assertFalse(mock_routes.called)
general.check_os_version.assert_called_once_with(6, 0)
@mock.patch('cloudinit.osys.base.get_osutils')
@mock.patch('cloudinit.osys.windows.network.Network.routes')
def test_set_metadata_ip_route_not_invalid_url(self, mock_routes,
mock_osutils):
general = mock_osutils.return_value.general
general.check_os_version.return_value = True
self._network.set_metadata_ip_route("http://169.253.169.253")
self.assertFalse(mock_routes.called)
general.check_os_version.assert_called_once_with(6, 0)
@mock.patch('cloudinit.osys.base.get_osutils')
@mock.patch('cloudinit.osys.windows.network.Network.routes')
@mock.patch('cloudinit.osys.windows.network.Network.default_gateway')
def test_set_metadata_ip_route_route_already_exists(
self, mock_default_gateway, mock_routes, mock_osutils):
mock_route = mock.Mock()
mock_route.destination = "169.254.169.254"
mock_routes.return_value = (mock_route, )
self._network.set_metadata_ip_route("http://169.254.169.254")
self.assertTrue(mock_routes.called)
self.assertFalse(mock_default_gateway.called)
@mock.patch('cloudinit.osys.base.get_osutils')
@mock.patch('cloudinit.osys.windows.network._check_url')
@mock.patch('cloudinit.osys.windows.network.Network.routes')
@mock.patch('cloudinit.osys.windows.network.Network.default_gateway')
def test_set_metadata_ip_route_route_missing_url_accessible(
self, mock_default_gateway, mock_routes,
mock_check_url, mock_osutils):
mock_routes.return_value = ()
mock_check_url.return_value = True
self._network.set_metadata_ip_route("http://169.254.169.254")
self.assertTrue(mock_routes.called)
self.assertFalse(mock_default_gateway.called)
self.assertTrue(mock_osutils.called)
@mock.patch('cloudinit.osys.base.get_osutils')
@mock.patch('cloudinit.osys.windows.network._check_url')
@mock.patch('cloudinit.osys.windows.network.Network.routes')
@mock.patch('cloudinit.osys.windows.network.Network.default_gateway')
@mock.patch('cloudinit.osys.windows.network.Route')
def test_set_metadata_ip_route_no_default_gateway(
self, mock_Route, mock_default_gateway,
mock_routes, mock_check_url, mock_osutils):
mock_routes.return_value = ()
mock_check_url.return_value = False
mock_default_gateway.return_value = None
self._network.set_metadata_ip_route("http://169.254.169.254")
self.assertTrue(mock_osutils.called)
self.assertTrue(mock_routes.called)
self.assertTrue(mock_default_gateway.called)
self.assertFalse(mock_Route.called)
@mock.patch('cloudinit.osys.base.get_osutils')
@mock.patch('cloudinit.osys.windows.network._check_url')
@mock.patch('cloudinit.osys.windows.network.Network.routes')
@mock.patch('cloudinit.osys.windows.network.Network.default_gateway')
@mock.patch('cloudinit.osys.windows.network.Route')
def test_set_metadata_ip_route(
self, mock_Route, mock_default_gateway,
mock_routes, mock_check_url, mock_osutils):
mock_routes.return_value = ()
mock_check_url.return_value = False
with LogSnatcher('cloudinit.osys.windows.network') as snatcher:
self._network.set_metadata_ip_route("http://169.254.169.254")
expected = ['Setting gateway for host: 169.254.169.254']
self.assertEqual(expected, snatcher.output)
self.assertTrue(mock_routes.called)
self.assertTrue(mock_default_gateway.called)
mock_Route.assert_called_once_with(
destination="169.254.169.254",
netmask="255.255.255.255",
gateway=mock_default_gateway.return_value.gateway,
interface=None, metric=None)
mock_Route.add.assert_called_once_with(mock_Route.return_value)
self.assertTrue(mock_osutils.called)


@@ -1,176 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
from cloudinit.sources import base as base_source
from cloudinit.sources.openstack import base
from cloudinit import tests
from cloudinit.tests.util import LogSnatcher
from cloudinit.tests.util import mock
class TestBaseOpenStackSource(tests.TestCase):
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'__abstractmethods__', new=())
def setUp(self):
self._source = base.BaseOpenStackSource()
super(TestBaseOpenStackSource, self).setUp()
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_available_versions')
def _test_working_version(self, mock_available_versions,
versions, expected_version):
mock_available_versions.return_value = versions
with LogSnatcher('cloudinit.sources.openstack.base') as snatcher:
version = self._source._working_version()
msg = "Selected version '{0}' from {1}"
expected_logging = [msg.format(expected_version, versions)]
self.assertEqual(expected_logging, snatcher.output)
self.assertEqual(expected_version, version)
def test_working_version_latest(self):
self._test_working_version(versions=(), expected_version='latest')
def test_working_version_other_version(self):
versions = (
base._OS_FOLSOM,
base._OS_GRIZZLY,
base._OS_HAVANA,
)
self._test_working_version(versions=versions,
expected_version=base._OS_HAVANA)
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_get_meta_data')
def test_metadata_capabilities(self, mock_get_meta_data):
mock_get_meta_data.return_value = {
'uuid': mock.sentinel.id,
'hostname': mock.sentinel.hostname,
'public_keys': {'key-one': 'key-one', 'key-two': 'key-two'},
}
instance_id = self._source.instance_id()
hostname = self._source.host_name()
public_keys = self._source.public_keys()
self.assertEqual(mock.sentinel.id, instance_id)
self.assertEqual(mock.sentinel.hostname, hostname)
self.assertEqual(["key-one", "key-two"], sorted(public_keys))
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_get_meta_data')
def test_no_public_keys(self, mock_get_meta_data):
mock_get_meta_data.return_value = {'public_keys': []}
public_keys = self._source.public_keys()
self.assertEqual([], public_keys)
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_get_meta_data')
def test_admin_password(self, mock_get_meta_data):
mock_get_meta_data.return_value = {
'meta': {base._ADMIN_PASSWORD: mock.sentinel.password}
}
password = self._source.admin_password()
self.assertEqual(mock.sentinel.password, password)
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_path_join')
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_get_cache_data')
def test_get_content(self, mock_get_cache_data, mock_path_join):
result = self._source._get_content(mock.sentinel.name)
mock_path_join.assert_called_once_with(
'openstack', 'content', mock.sentinel.name)
mock_get_cache_data.assert_called_once_with(
mock_path_join.return_value)
self.assertEqual(mock_get_cache_data.return_value, result)
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_path_join')
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_get_cache_data')
def test_user_data(self, mock_get_cache_data, mock_path_join):
result = self._source.user_data()
mock_path_join.assert_called_once_with(
'openstack', self._source._version, 'user_data')
mock_get_cache_data.assert_called_once_with(
mock_path_join.return_value)
self.assertEqual(mock_get_cache_data.return_value.buffer, result)
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_path_join')
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_get_cache_data')
def test_get_metadata(self, mock_get_cache_data, mock_path_join):
mock_get_cache_data.return_value = base_source.APIResponse(b"{}")
result = self._source._get_meta_data()
mock_path_join.assert_called_once_with(
'openstack', self._source._version, 'meta_data.json')
mock_get_cache_data.assert_called_once_with(
mock_path_join.return_value)
self.assertEqual({}, result)
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_path_join')
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_get_cache_data')
def test_vendor_data(self, mock_get_cache_data, mock_path_join):
result = self._source.vendor_data()
mock_path_join.assert_called_once_with(
'openstack', self._source._version, 'vendor_data.json')
mock_get_cache_data.assert_called_once_with(
mock_path_join.return_value)
self.assertEqual(mock_get_cache_data.return_value.buffer, result)
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_working_version')
def test_load(self, mock_working_version):
self._source.load()
self.assertTrue(mock_working_version.called)
self.assertEqual(mock_working_version.return_value,
self._source._version)
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_get_meta_data')
def test_network_config_no_config(self, mock_get_metadata):
mock_get_metadata.return_value = {}
self.assertIsNone(self._source.network_config())
mock_get_metadata.return_value = {1: 2}
self.assertIsNone(self._source.network_config())
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_get_meta_data')
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_get_content')
def test_network_config(self, mock_get_content, mock_get_metadata):
mock_get_metadata.return_value = {
"network_config": {base._PAYLOAD_KEY: "content_path"}
}
result = self._source.network_config()
mock_get_content.assert_called_once_with("content_path")
self.assertEqual(str(mock_get_content.return_value), result)
@mock.patch('cloudinit.sources.openstack.base.BaseOpenStackSource.'
'_get_data')
def test_get_cache_data(self, mock_get_data):
mock_get_data.return_value = b'test'
result = self._source._get_cache_data(mock.sentinel.path)
mock_get_data.assert_called_once_with(mock.sentinel.path)
self.assertEqual(b'test', result)

@@ -1,251 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab

import textwrap

from six.moves import http_client

from cloudinit import exceptions
from cloudinit.sources import base
from cloudinit.sources.openstack import httpopenstack
from cloudinit import tests
from cloudinit.tests.util import LogSnatcher
from cloudinit.tests.util import mock
from cloudinit import url_helper


class TestHttpOpenStackSource(tests.TestCase):
def setUp(self):
self._source = httpopenstack.HttpOpenStackSource()
super(TestHttpOpenStackSource, self).setUp()
@mock.patch.object(httpopenstack, 'IS_WINDOWS', new=False)
@mock.patch('cloudinit.osys.windows.network.Network.'
'set_metadata_ip_route')
def test__enable_metadata_access_not_nt(self, mock_set_metadata_ip_route):
self._source._enable_metadata_access(mock.sentinel.metadata_url)
self.assertFalse(mock_set_metadata_ip_route.called)
@mock.patch.object(httpopenstack, 'IS_WINDOWS', new=True)
@mock.patch('cloudinit.osys.base.get_osutils')
def test__enable_metadata_access_nt(self, mock_get_osutils):
self._source._enable_metadata_access(mock.sentinel.metadata_url)
mock_get_osutils.assert_called_once_with()
osutils = mock_get_osutils.return_value
osutils.network.set_metadata_ip_route.assert_called_once_with(
mock.sentinel.metadata_url)
def test__path_join(self):
calls = [
(('path', 'a', 'b'), 'path/a/b'),
(('path', ), 'path'),
(('path/', 'b/'), 'path/b/'),
]
for arguments, expected in calls:
path = self._source._path_join(*arguments)
self.assertEqual(expected, path)
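The `_path_join` assertions above (collapsing the duplicate slash in `('path/', 'b/')` while leaving a lone `'path'` untouched) match POSIX-style path joining. A minimal sketch that satisfies exactly those cases; the retired implementation may have differed in details:

```python
import posixpath


def path_join(*parts):
    # Join URL path segments POSIX-style; unlike a plain '/'.join, this
    # collapses the duplicate separator between 'path/' and 'b/'.
    return posixpath.join(*parts)


assert path_join('path', 'a', 'b') == 'path/a/b'
assert path_join('path') == 'path'
assert path_join('path/', 'b/') == 'path/b/'
```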
@mock.patch('cloudinit.sources.openstack.httpopenstack.'
'HttpOpenStackSource._get_cache_data')
def test__available_versions(self, mock_get_cache_data):
mock_get_cache_data.return_value = textwrap.dedent("""
2013-02-02
2014-04-04
2015-05-05
latest""")
versions = self._source._available_versions()
expected = ['2013-02-02', '2014-04-04', '2015-05-05', 'latest']
mock_get_cache_data.assert_called_once_with("openstack")
self.assertEqual(expected, versions)
@mock.patch('cloudinit.sources.openstack.httpopenstack.'
'HttpOpenStackSource._get_cache_data')
def _test__available_versions_invalid_versions(
self, version, mock_get_cache_data):
mock_get_cache_data.return_value = version
exc = self.assertRaises(exceptions.CloudInitError,
self._source._available_versions)
expected = 'Invalid API version %r' % (version,)
self.assertEqual(expected, str(exc))
def test__available_versions_invalid_versions(self):
versions = ['2013-no-worky', '2012', '2012-02',
'lates', '20004-111-222', '2004-11-11111',
' 2004-11-20']
for version in versions:
self._test__available_versions_invalid_versions(version)
@mock.patch('cloudinit.sources.openstack.httpopenstack.'
'HttpOpenStackSource._get_cache_data')
def test__available_versions_no_version_found(self, mock_get_cache_data):
mock_get_cache_data.return_value = ''
exc = self.assertRaises(exceptions.CloudInitError,
self._source._available_versions)
self.assertEqual('No metadata versions were found.', str(exc))
@mock.patch('cloudinit.sources.openstack.httpopenstack.'
'HttpOpenStackSource._get_cache_data')
def _test_is_password_set(self, mock_get_cache_data, data, expected):
mock_get_cache_data.return_value = data
result = self._source.is_password_set
self.assertEqual(expected, result)
mock_get_cache_data.assert_called_once_with(
self._source._password_path)
def test_is_password_set(self):
empty_data = base.APIResponse(b"")
non_empty_data = base.APIResponse(b"password")
self._test_is_password_set(data=empty_data, expected=False)
self._test_is_password_set(data=non_empty_data, expected=True)
def _test_can_update_password(self, version, expected):
with mock.patch.object(self._source, '_version', new=version):
self.assertEqual(self._source.can_update_password(), expected)
def test_can_update_password(self):
self._test_can_update_password('2012-08-10', expected=False)
self._test_can_update_password('2012-11-10', expected=False)
self._test_can_update_password('2013-04-04', expected=True)
self._test_can_update_password('2014-04-04', expected=True)
self._test_can_update_password('latest', expected=False)
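The expectations above imply a date cutoff: date-style metadata versions from 2013-04-04 onward can post a password, while earlier dates and non-date versions such as `'latest'` cannot. A minimal sketch consistent with those assertions (the `2013-04-04` cutoff is inferred from the tests, not from the retired source):

```python
from datetime import datetime

# Assumed cutoff, inferred from the test expectations above.
PASSWORD_POST_SINCE = datetime(2013, 4, 4)


def can_update_password(version):
    # Date-style versions at or after the cutoff support password posting;
    # anything unparseable ('latest', malformed strings) does not.
    try:
        return datetime.strptime(version, '%Y-%m-%d') >= PASSWORD_POST_SINCE
    except ValueError:
        return False
```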
@mock.patch('cloudinit.sources.openstack.httpopenstack.'
'HttpOpenStackSource._path_join')
@mock.patch('cloudinit.url_helper.read_url')
def test__post_data(self, mock_read_url, mock_path_join):
with LogSnatcher('cloudinit.sources.openstack.'
'httpopenstack') as snatcher:
self._source._post_data(mock.sentinel.path,
mock.sentinel.data)
expected_logging = [
'Posting metadata to: %s' % mock_path_join.return_value
]
self.assertEqual(expected_logging, snatcher.output)
mock_path_join.assert_called_once_with(
self._source._config['metadata_url'], mock.sentinel.path)
mock_read_url.assert_called_once_with(
mock_path_join.return_value, data=mock.sentinel.data,
retries=self._source._config['retries'],
timeout=self._source._config['timeout'])
@mock.patch('cloudinit.sources.openstack.httpopenstack.'
'HttpOpenStackSource._post_data')
def test_post_password(self, mock_post_data):
self.assertTrue(self._source.post_password(mock.sentinel.password))
mock_post_data.assert_called_once_with(
self._source._password_path, mock.sentinel.password)
@mock.patch('cloudinit.sources.openstack.httpopenstack.'
'HttpOpenStackSource._post_data')
def test_post_password_already_posted(self, mock_post_data):
exc = url_helper.UrlError(None)
exc.status_code = http_client.CONFLICT
mock_post_data.side_effect = exc
self.assertFalse(self._source.post_password(mock.sentinel.password))
mock_post_data.assert_called_once_with(
self._source._password_path, mock.sentinel.password)
@mock.patch('cloudinit.sources.openstack.httpopenstack.'
'HttpOpenStackSource._post_data')
def test_post_password_other_error(self, mock_post_data):
exc = url_helper.UrlError(None)
exc.status_code = http_client.NOT_FOUND
mock_post_data.side_effect = exc
self.assertRaises(url_helper.UrlError,
self._source.post_password,
mock.sentinel.password)
mock_post_data.assert_called_once_with(
self._source._password_path, mock.sentinel.password)
@mock.patch('cloudinit.sources.openstack.base.'
'BaseOpenStackSource.load')
@mock.patch('cloudinit.sources.openstack.httpopenstack.'
'HttpOpenStackSource._get_meta_data')
@mock.patch('cloudinit.sources.openstack.httpopenstack.'
'HttpOpenStackSource._enable_metadata_access')
def _test_load(self, mock_enable_metadata_access,
mock_get_metadata, mock_load, expected,
expected_logging, metadata_side_effect=None):
mock_get_metadata.side_effect = metadata_side_effect
with LogSnatcher('cloudinit.sources.openstack.'
'httpopenstack') as snatcher:
response = self._source.load()
self.assertEqual(expected, response)
mock_enable_metadata_access.assert_called_once_with(
self._source._config['metadata_url'])
mock_load.assert_called_once_with()
mock_get_metadata.assert_called_once_with()
self.assertEqual(expected_logging, snatcher.output)
def test_load_works(self):
self._test_load(expected=True, expected_logging=[])
def test_load_fails(self):
expected_logging = [
'Metadata not found at URL %r'
% self._source._config['metadata_url']
]
self._test_load(expected=False,
expected_logging=expected_logging,
metadata_side_effect=ValueError)
@mock.patch('cloudinit.sources.openstack.httpopenstack.'
'HttpOpenStackSource._path_join')
@mock.patch('cloudinit.url_helper.wait_any_url')
def test__get_data_inaccessible_metadata(self, mock_wait_any_url,
mock_path_join):
mock_wait_any_url.return_value = None
mock_path_join.return_value = mock.sentinel.path_join
msg = "Metadata for url {0} was not accessible in due time"
expected = msg.format(mock.sentinel.path_join)
expected_logging = [
'Getting metadata from: %s' % mock.sentinel.path_join
]
with LogSnatcher('cloudinit.sources.openstack.'
'httpopenstack') as snatcher:
exc = self.assertRaises(exceptions.CloudInitError,
self._source._get_data, 'test')
self.assertEqual(expected, str(exc))
self.assertEqual(expected_logging, snatcher.output)
@mock.patch('cloudinit.sources.openstack.httpopenstack.'
'HttpOpenStackSource._path_join')
@mock.patch('cloudinit.url_helper.wait_any_url')
def test__get_data(self, mock_wait_any_url, mock_path_join):
mock_response = mock.Mock()
response = b"test"
mock_response.contents = response
mock_response.encoding = 'utf-8'
mock_wait_any_url.return_value = (None, mock_response)
mock_path_join.return_value = mock.sentinel.path_join
expected_logging = [
'Getting metadata from: %s' % mock.sentinel.path_join
]
with LogSnatcher('cloudinit.sources.openstack.'
'httpopenstack') as snatcher:
result = self._source._get_data('test')
self.assertEqual(expected_logging, snatcher.output)
self.assertIsInstance(result, base.APIResponse)
self.assertEqual('test', str(result))
self.assertEqual(b'test', result.buffer)

@@ -1,115 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab

import functools
import string
import types

from cloudinit import exceptions
from cloudinit import plugin_finder
from cloudinit.sources import base
from cloudinit.sources import strategy
from cloudinit import tests


class TestDataSourceDiscovery(tests.TestCase):
def setUp(self):
super(TestDataSourceDiscovery, self).setUp()
self._modules = None
@property
def modules(self):
if self._modules:
return self._modules
class Module(types.ModuleType):
def data_sources(self):
return (self, )
def __call__(self):
return self
@property
def __class__(self):
return self
modules = self._modules = list(map(Module, string.ascii_letters))
return modules
@property
def module_iterator(self):
modules = self.modules
class ModuleIterator(plugin_finder.BaseModuleIterator):
def list_modules(self):
return modules + [None, "", 42]
return ModuleIterator(None)
def test_loader_api(self):
# Test that the API of DataSourceLoader is sane
loader = base.DataSourceLoader(
names=[], module_iterator=self.module_iterator,
strategies=[])
all_data_sources = list(loader.all_data_sources())
valid_data_sources = list(loader.valid_data_sources())
self.assertEqual(all_data_sources, self.modules)
self.assertEqual(valid_data_sources, self.modules)
def test_loader_strategies(self):
class OrdStrategy(strategy.BaseSearchStrategy):
def search_data_sources(self, data_sources):
return filter(lambda source: ord(source.__name__) < 100,
data_sources)
class NameStrategy(strategy.BaseSearchStrategy):
def search_data_sources(self, data_sources):
return (source for source in data_sources
if source.__name__ in ('a', 'b', 'c'))
loader = base.DataSourceLoader(
names=[], module_iterator=self.module_iterator,
strategies=(OrdStrategy(), NameStrategy(), ))
valid_data_sources = list(loader.valid_data_sources())
self.assertEqual(len(valid_data_sources), 3)
self.assertEqual([source.__name__ for source in valid_data_sources],
['a', 'b', 'c'])
def test_get_data_source_filtered_by_name(self):
source = base.get_data_source(
names=['a', 'c'],
module_iterator=self.module_iterator.__class__)
self.assertEqual(source.__name__, 'a')
def test_get_data_source_multiple_strategies(self):
class ReversedStrategy(strategy.BaseSearchStrategy):
def search_data_sources(self, data_sources):
return reversed(list(data_sources))
source = base.get_data_source(
names=['a', 'b', 'c'],
module_iterator=self.module_iterator.__class__,
strategies=(ReversedStrategy, ))
self.assertEqual(source.__name__, 'c')
def test_get_data_source_no_data_source(self):
get_data_source = functools.partial(
base.get_data_source,
names=['totallymissing'],
module_iterator=self.module_iterator.__class__)
exc = self.assertRaises(exceptions.CloudInitError,
get_data_source)
self.assertEqual(str(exc), 'No available data source found')
def test_get_data_source_no_name_filtering(self):
source = base.get_data_source(
names=[], module_iterator=self.module_iterator.__class__)
self.assertEqual(source.__name__, 'a')

@@ -1,100 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab

from cloudinit.sources import strategy
from cloudinit import tests
from cloudinit.tests.util import mock


class TestStrategy(tests.TestCase):
def test_custom_strategy(self):
class CustomStrategy(strategy.BaseSearchStrategy):
def search_data_sources(self, data_sources):
# Return them in reverse order
return list(reversed(data_sources))
data_sources = [mock.sentinel.first, mock.sentinel.second]
instance = CustomStrategy()
sources = instance.search_data_sources(data_sources)
self.assertEqual(sources, [mock.sentinel.second, mock.sentinel.first])
def test_is_datasource_available(self):
class CustomStrategy(strategy.BaseSearchStrategy):
def search_data_sources(self, _):
pass
instance = CustomStrategy()
good_source = mock.Mock()
good_source.load.return_value = True
bad_source = mock.Mock()
bad_source.load.return_value = False
self.assertTrue(instance.is_datasource_available(good_source))
self.assertFalse(instance.is_datasource_available(bad_source))
def test_filter_name_strategy(self):
names = ['first', 'second', 'third']
full_names = names + ['fourth', 'fifth']
sources = [type(name, (object, ), {})() for name in full_names]
instance = strategy.FilterNameStrategy(names)
sources = list(instance.search_data_sources(sources))
self.assertEqual(len(sources), 3)
self.assertEqual([source.__class__.__name__ for source in sources],
names)
def test_serial_search_strategy(self):
def is_available(self, data_source):
return data_source in available_sources
sources = [mock.sentinel.first, mock.sentinel.second,
mock.sentinel.third, mock.sentinel.fourth]
available_sources = [mock.sentinel.second, mock.sentinel.fourth]
with mock.patch('cloudinit.sources.strategy.BaseSearchStrategy.'
'is_datasource_available', new=is_available):
instance = strategy.SerialSearchStrategy()
valid_sources = list(instance.search_data_sources(sources))
self.assertEqual(available_sources, valid_sources)
def test_filter_version_strategy(self):
class SourceV1(object):
def version(self):
return 'first'
class SourceV2(SourceV1):
def version(self):
return 'second'
class SourceV3(object):
def version(self):
return 'third'
sources = [SourceV1(), SourceV2(), SourceV3()]
instance = strategy.FilterVersionStrategy(['third', 'first'])
filtered_sources = sorted(
source.version()
for source in instance.search_data_sources(sources))
self.assertEqual(len(filtered_sources), 2)
self.assertEqual(filtered_sources, ['first', 'third'])
def test_filter_version_strategy_no_versions_given(self):
class SourceV1(object):
def version(self):
return 'first'
sources = [SourceV1()]
instance = strategy.FilterVersionStrategy()
filtered_sources = list(instance.search_data_sources(sources))
self.assertEqual(len(filtered_sources), 0)
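The two `FilterVersionStrategy` tests above pin down its contract: keep only sources whose `version()` is in the allow-list, and filter everything out when no versions are given. A minimal sketch inferred from those assertions; the retired `cloudinit.sources.strategy` implementation may have differed:

```python
class FilterVersionStrategy:
    """Keep only data sources whose version() is in an allow-list.

    With no versions given, no source passes (matching the second
    test above).
    """

    def __init__(self, versions=None):
        self.versions = versions or []

    def search_data_sources(self, data_sources):
        # Lazily yield only the allowed versions.
        return (s for s in data_sources if s.version() in self.versions)


class SourceV1:
    def version(self):
        return 'first'


class SourceV2:
    def version(self):
        return 'second'


filtered = FilterVersionStrategy(['first']).search_data_sources(
    [SourceV1(), SourceV2()])
```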

@@ -1,55 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab

import contextlib
import os
import shutil
import tempfile

from cloudinit import plugin_finder
from cloudinit.tests import TestCase
from cloudinit.tests import util


class TestPkgutilModuleIterator(TestCase):
@staticmethod
@contextlib.contextmanager
def _create_tmpdir():
tmpdir = tempfile.mkdtemp()
try:
yield tmpdir
finally:
shutil.rmtree(tmpdir)
@contextlib.contextmanager
def _create_package(self):
with self._create_tmpdir() as tmpdir:
path = os.path.join(tmpdir, 'good.py')
with open(path, 'w') as stream:
stream.write('name = 42')
# Make sure this fails.
bad = os.path.join(tmpdir, 'bad.py')
with open(bad, 'w') as stream:
stream.write('import missingmodule')
yield tmpdir
def test_pkgutil_module_iterator(self):
logging_format = ("Could not import the module 'bad' "
"using the search path %r")
with util.LogSnatcher('cloudinit.plugin_finder') as snatcher:
with self._create_package() as tmpdir:
expected_logging = logging_format % tmpdir
iterator = plugin_finder.PkgutilModuleIterator([tmpdir])
modules = list(iterator.list_modules())
self.assertEqual(len(modules), 1)
module = modules[0]
self.assertEqual(module.name, 42)
self.assertEqual(len(snatcher.output), 1)
self.assertEqual(snatcher.output[0], expected_logging)

@@ -1,28 +0,0 @@
from cloudinit.registry import DictRegistry
from cloudinit.tests import TestCase
from cloudinit.tests.util import mock


class TestDictRegistry(TestCase):
def test_added_item_included_in_output(self):
registry = DictRegistry()
item_key, item_to_register = 'test_key', mock.Mock()
registry.register_item(item_key, item_to_register)
self.assertEqual({item_key: item_to_register},
registry.registered_items)
def test_registry_starts_out_empty(self):
self.assertEqual({}, DictRegistry().registered_items)
def test_modifying_registered_items_isnt_exposed_to_other_callers(self):
registry = DictRegistry()
registry.registered_items['test_item'] = mock.Mock()
self.assertEqual({}, registry.registered_items)
def test_keys_cannot_be_replaced(self):
registry = DictRegistry()
item_key = 'test_key'
registry.register_item(item_key, mock.Mock())
self.assertRaises(ValueError,
registry.register_item, item_key, mock.Mock())
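The four tests above fully characterize `DictRegistry`: it starts empty, `registered_items` returns a defensive copy, and re-registering a key raises `ValueError`. A minimal sketch matching that behavior; the retired `cloudinit.registry` implementation may have differed:

```python
class DictRegistry:
    """A write-once mapping of named items, sketched from the tests above."""

    def __init__(self):
        self._items = {}

    def register_item(self, key, item):
        # Keys cannot be replaced once registered.
        if key in self._items:
            raise ValueError('Item already registered: %s' % key)
        self._items[key] = item

    @property
    def registered_items(self):
        # Return a copy so callers cannot mutate the registry's state.
        return dict(self._items)
```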

@@ -1,360 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab

from cloudinit import reporting
from cloudinit.reporting import handlers
from cloudinit.tests import TestCase
from cloudinit.tests.util import mock


def _fake_registry():
    return mock.Mock(registered_items={'a': mock.MagicMock(),
                                       'b': mock.MagicMock()})


class TestReportStartEvent(TestCase):
@mock.patch('cloudinit.reporting.instantiated_handler_registry',
new_callable=_fake_registry)
def test_report_start_event_passes_something_with_as_string_to_handlers(
self, instantiated_handler_registry):
event_name, event_description = 'my_test_event', 'my description'
reporting.report_start_event(event_name, event_description)
expected_string_representation = ': '.join(
['start', event_name, event_description])
for _, handler in (
instantiated_handler_registry.registered_items.items()):
self.assertEqual(1, handler.publish_event.call_count)
event = handler.publish_event.call_args[0][0]
self.assertEqual(expected_string_representation, event.as_string())


class TestReportFinishEvent(TestCase):
def _report_finish_event(self, result=reporting.status.SUCCESS):
event_name, event_description = 'my_test_event', 'my description'
reporting.report_finish_event(
event_name, event_description, result=result)
return event_name, event_description
def assertHandlersPassedObjectWithAsString(
self, handlers, expected_as_string):
for _, handler in handlers.items():
self.assertEqual(1, handler.publish_event.call_count)
event = handler.publish_event.call_args[0][0]
self.assertEqual(expected_as_string, event.as_string())
@mock.patch('cloudinit.reporting.instantiated_handler_registry',
new_callable=_fake_registry)
def test_report_finish_event_passes_something_with_as_string_to_handlers(
self, instantiated_handler_registry):
event_name, event_description = self._report_finish_event()
expected_string_representation = ': '.join(
['finish', event_name, reporting.status.SUCCESS,
event_description])
self.assertHandlersPassedObjectWithAsString(
instantiated_handler_registry.registered_items,
expected_string_representation)
@mock.patch('cloudinit.reporting.instantiated_handler_registry',
new_callable=_fake_registry)
def test_reporting_successful_finish_has_sensible_string_repr(
self, instantiated_handler_registry):
event_name, event_description = self._report_finish_event(
result=reporting.status.SUCCESS)
expected_string_representation = ': '.join(
['finish', event_name, reporting.status.SUCCESS,
event_description])
self.assertHandlersPassedObjectWithAsString(
instantiated_handler_registry.registered_items,
expected_string_representation)
@mock.patch('cloudinit.reporting.instantiated_handler_registry',
new_callable=_fake_registry)
def test_reporting_unsuccessful_finish_has_sensible_string_repr(
self, instantiated_handler_registry):
event_name, event_description = self._report_finish_event(
result=reporting.status.FAIL)
expected_string_representation = ': '.join(
['finish', event_name, reporting.status.FAIL, event_description])
self.assertHandlersPassedObjectWithAsString(
instantiated_handler_registry.registered_items,
expected_string_representation)
def test_invalid_result_raises_attribute_error(self):
self.assertRaises(ValueError, self._report_finish_event, ("BOGUS",))


class TestReportingEvent(TestCase):
def test_as_string(self):
event_type, name, description = 'test_type', 'test_name', 'test_desc'
event = reporting.ReportingEvent(event_type, name, description)
expected_string_representation = ': '.join(
[event_type, name, description])
self.assertEqual(expected_string_representation, event.as_string())
def test_as_dict(self):
event_type, name, desc = 'test_type', 'test_name', 'test_desc'
event = reporting.ReportingEvent(event_type, name, desc)
self.assertEqual(
{'event_type': event_type, 'name': name, 'description': desc},
event.as_dict())


class TestFinishReportingEvent(TestCase):
def test_as_has_result(self):
result = reporting.status.SUCCESS
name, desc = 'test_name', 'test_desc'
event = reporting.FinishReportingEvent(name, desc, result)
ret = event.as_dict()
self.assertTrue('result' in ret)
self.assertEqual(ret['result'], result)


class TestBaseReportingHandler(TestCase):
def test_base_reporting_handler_is_abstract(self):
exc = self.assertRaises(TypeError, handlers.ReportingHandler)
self.assertIn("publish_event", str(exc))
self.assertIn("abstract", str(exc))


class TestLogHandler(TestCase):
@mock.patch.object(reporting.handlers.logging, 'getLogger')
def test_appropriate_logger_used(self, getLogger):
event_type, event_name = 'test_type', 'test_name'
event = reporting.ReportingEvent(event_type, event_name, 'description')
reporting.handlers.LogHandler().publish_event(event)
self.assertEqual(
[mock.call(
'cloudinit.reporting.{0}.{1}'.format(event_type, event_name))],
getLogger.call_args_list)
@mock.patch.object(reporting.handlers.logging, 'getLogger')
def test_single_log_message_at_info_published(self, getLogger):
event = reporting.ReportingEvent('type', 'name', 'description')
reporting.handlers.LogHandler().publish_event(event)
self.assertEqual(1, getLogger.return_value.info.call_count)
@mock.patch.object(reporting.handlers.logging, 'getLogger')
def test_log_message_uses_event_as_string(self, getLogger):
event = reporting.ReportingEvent('type', 'name', 'description')
reporting.handlers.LogHandler().publish_event(event)
self.assertIn(event.as_string(),
getLogger.return_value.info.call_args[0][0])


class TestDefaultRegisteredHandler(TestCase):
def test_log_handler_registered_by_default(self):
registered_items = (
reporting.instantiated_handler_registry.registered_items)
for _, item in registered_items.items():
if isinstance(item, reporting.handlers.LogHandler):
break
else:
self.fail('No reporting LogHandler registered by default.')


class TestReportingConfiguration(TestCase):
@mock.patch.object(reporting, 'instantiated_handler_registry')
def test_empty_configuration_doesnt_add_handlers(
self, instantiated_handler_registry):
reporting.update_configuration({})
self.assertEqual(
0, instantiated_handler_registry.register_item.call_count)
@mock.patch.object(
reporting, 'instantiated_handler_registry', reporting.DictRegistry())
@mock.patch.object(reporting, 'available_handlers')
def test_looks_up_handler_by_type_and_adds_it(self, available_handlers):
handler_type_name = 'test_handler'
handler_cls = mock.Mock()
available_handlers.registered_items = {handler_type_name: handler_cls}
handler_name = 'my_test_handler'
reporting.update_configuration(
{handler_name: {'type': handler_type_name}})
self.assertEqual(
{handler_name: handler_cls.return_value},
reporting.instantiated_handler_registry.registered_items)
@mock.patch.object(
reporting, 'instantiated_handler_registry', reporting.DictRegistry())
@mock.patch.object(reporting, 'available_handlers')
def test_uses_non_type_parts_of_config_dict_as_kwargs(
self, available_handlers):
handler_type_name = 'test_handler'
handler_cls = mock.Mock()
available_handlers.registered_items = {handler_type_name: handler_cls}
extra_kwargs = {'foo': 'bar', 'bar': 'baz'}
handler_config = extra_kwargs.copy()
handler_config.update({'type': handler_type_name})
handler_name = 'my_test_handler'
reporting.update_configuration({handler_name: handler_config})
self.assertEqual(
handler_cls.return_value,
reporting.instantiated_handler_registry.registered_items[
handler_name])
self.assertEqual([mock.call(**extra_kwargs)],
handler_cls.call_args_list)
@mock.patch.object(
reporting, 'instantiated_handler_registry', reporting.DictRegistry())
@mock.patch.object(reporting, 'available_handlers')
def test_handler_config_not_modified(self, available_handlers):
handler_type_name = 'test_handler'
handler_cls = mock.Mock()
available_handlers.registered_items = {handler_type_name: handler_cls}
handler_config = {'type': handler_type_name, 'foo': 'bar'}
expected_handler_config = handler_config.copy()
reporting.update_configuration({'my_test_handler': handler_config})
self.assertEqual(expected_handler_config, handler_config)
@mock.patch.object(
reporting, 'instantiated_handler_registry', reporting.DictRegistry())
@mock.patch.object(reporting, 'available_handlers')
def test_handlers_removed_if_falseish_specified(self, available_handlers):
handler_type_name = 'test_handler'
handler_cls = mock.Mock()
available_handlers.registered_items = {handler_type_name: handler_cls}
handler_name = 'my_test_handler'
reporting.update_configuration(
{handler_name: {'type': handler_type_name}})
self.assertEqual(
1, len(reporting.instantiated_handler_registry.registered_items))
reporting.update_configuration({handler_name: None})
self.assertEqual(
0, len(reporting.instantiated_handler_registry.registered_items))


class TestReportingEventStack(TestCase):
@mock.patch('cloudinit.reporting.report_finish_event')
@mock.patch('cloudinit.reporting.report_start_event')
def test_start_and_finish_success(self, report_start, report_finish):
with reporting.ReportEventStack(name="myname", description="mydesc"):
pass
self.assertEqual(
[mock.call('myname', 'mydesc')], report_start.call_args_list)
self.assertEqual(
[mock.call('myname', 'mydesc', reporting.status.SUCCESS)],
report_finish.call_args_list)
@mock.patch('cloudinit.reporting.report_finish_event')
@mock.patch('cloudinit.reporting.report_start_event')
def test_finish_exception_defaults_fail(self, report_start, report_finish):
name = "myname"
desc = "mydesc"
try:
with reporting.ReportEventStack(name, description=desc):
raise ValueError("This didnt work")
except ValueError:
pass
self.assertEqual([mock.call(name, desc)], report_start.call_args_list)
self.assertEqual(
[mock.call(name, desc, reporting.status.FAIL)],
report_finish.call_args_list)
@mock.patch('cloudinit.reporting.report_finish_event')
@mock.patch('cloudinit.reporting.report_start_event')
def test_result_on_exception_used(self, report_start, report_finish):
name = "myname"
desc = "mydesc"
try:
with reporting.ReportEventStack(
name, desc, result_on_exception=reporting.status.WARN):
raise ValueError("This didnt work")
except ValueError:
pass
self.assertEqual([mock.call(name, desc)], report_start.call_args_list)
self.assertEqual(
[mock.call(name, desc, reporting.status.WARN)],
report_finish.call_args_list)
@mock.patch('cloudinit.reporting.report_start_event')
def test_child_fullname_respects_parent(self, report_start):
parent_name = "topname"
c1_name = "c1name"
c2_name = "c2name"
c2_expected_fullname = '/'.join([parent_name, c1_name, c2_name])
c1_expected_fullname = '/'.join([parent_name, c1_name])
parent = reporting.ReportEventStack(parent_name, "topdesc")
c1 = reporting.ReportEventStack(c1_name, "c1desc", parent=parent)
c2 = reporting.ReportEventStack(c2_name, "c2desc", parent=c1)
with c1:
report_start.assert_called_with(c1_expected_fullname, "c1desc")
with c2:
report_start.assert_called_with(c2_expected_fullname, "c2desc")
@mock.patch('cloudinit.reporting.report_finish_event')
@mock.patch('cloudinit.reporting.report_start_event')
def test_child_result_bubbles_up(self, report_start, report_finish):
parent = reporting.ReportEventStack("topname", "topdesc")
child = reporting.ReportEventStack("c_name", "c_desc", parent=parent)
with parent:
with child:
child.result = reporting.status.WARN
report_finish.assert_called_with(
"topname", "topdesc", reporting.status.WARN)
@mock.patch('cloudinit.reporting.report_finish_event')
def test_message_used_in_finish(self, report_finish):
with reporting.ReportEventStack("myname", "mydesc",
message="mymessage"):
pass
self.assertEqual(
[mock.call("myname", "mymessage", reporting.status.SUCCESS)],
report_finish.call_args_list)
@mock.patch('cloudinit.reporting.report_finish_event')
def test_message_updatable(self, report_finish):
with reporting.ReportEventStack("myname", "mydesc") as c:
c.message = "all good"
self.assertEqual(
[mock.call("myname", "all good", reporting.status.SUCCESS)],
report_finish.call_args_list)
@mock.patch('cloudinit.reporting.report_start_event')
@mock.patch('cloudinit.reporting.report_finish_event')
def test_reporting_disabled_does_not_report_events(
self, report_start, report_finish):
with reporting.ReportEventStack("a", "b", reporting_enabled=False):
pass
self.assertEqual(report_start.call_count, 0)
self.assertEqual(report_finish.call_count, 0)
@mock.patch('cloudinit.reporting.report_start_event')
@mock.patch('cloudinit.reporting.report_finish_event')
def test_reporting_child_default_to_parent(
self, report_start, report_finish):
parent = reporting.ReportEventStack(
"pname", "pdesc", reporting_enabled=False)
child = reporting.ReportEventStack("cname", "cdesc", parent=parent)
with parent:
with child:
pass
pass
self.assertEqual(report_start.call_count, 0)
self.assertEqual(report_finish.call_count, 0)
def test_reporting_event_has_sane_repr(self):
myrep = reporting.ReportEventStack("fooname", "foodesc",
reporting_enabled=True).__repr__()
self.assertIn("fooname", myrep)
self.assertIn("foodesc", myrep)
self.assertIn("True", myrep)
def test_set_invalid_result_raises_value_error(self):
f = reporting.ReportEventStack("myname", "mydesc")
self.assertRaises(ValueError, setattr, f, "result", "BOGUS")
class TestStatusAccess(TestCase):
    def test_invalid_status_access_raises_attribute_error(self):
self.assertRaises(AttributeError, getattr, reporting.status, "BOGUS")

@@ -1,47 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
from cloudinit import safeyaml as yaml
from cloudinit.tests import TestCase
import tempfile
class TestSafeYaml(TestCase):
def test_simple(self):
blob = '\nk1: one\nk2: two'
expected = {'k1': "one", 'k2': "two"}
self.assertEqual(yaml.loads(blob), expected)
def test_bogus_raises_exception(self):
badyaml = "1\n 2:"
self.assertRaises(yaml.YAMLError, yaml.loads, badyaml)
def test_unsafe_types(self):
# should not load complex types
unsafe_yaml = "!!python/object:__builtin__.object {}"
self.assertRaises(yaml.YAMLError, yaml.loads, unsafe_yaml)
def test_python_unicode_not_allowed(self):
# python/unicode is not allowed
# in the past this type was allowed, but not now, so explicit test.
blob = "{k1: !!python/unicode 'my unicode', k2: my string}"
self.assertRaises(yaml.YAMLError, yaml.loads, blob)
def test_dumps_returns_string(self):
self.assertTrue(
isinstance(yaml.dumps(867 - 5309), (str,)))
def test_dumps_is_loadable(self):
mydata = {'a': 'hey', 'b': ['bee', 'Bea']}
self.assertEqual(yaml.loads(yaml.dumps(mydata)), mydata)
def test_load(self):
valid_yaml = "foo: bar"
expected = {'foo': 'bar'}
with tempfile.NamedTemporaryFile(mode='w', delete=False) as tmpf:
tmpf.write(valid_yaml)
tmpf.close()
self.assertEqual(yaml.load(tmpf.name), expected)

@@ -1,63 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import six
import cloudinit.shell as shell
from cloudinit.tests import TestCase
from cloudinit.tests.util import mock
class TestMain(TestCase):
def test_help_exits_success(self):
with mock.patch('cloudinit.shell.sys.stdout'):
exc = self.assertRaises(
SystemExit, shell.main, args=['cloud-init', '--help'])
self.assertEqual(exc.code, 0)
def test_invalid_arguments_exit_fail(self):
# silence writes that get to stderr
with mock.patch('cloudinit.shell.sys.stderr'):
exc = self.assertRaises(
SystemExit, shell.main, args=['cloud-init', 'bogus_argument'])
self.assertNotEqual(exc.code, 0)
@mock.patch('cloudinit.shell.sys.stdout')
def test_version_shows_cloud_init(self, mock_out_write):
shell.main(args=['cloud-init', 'version'])
write_arg = mock_out_write.write.call_args[0][0]
self.assertTrue(write_arg.startswith('cloud-init'))
@mock.patch('cloudinit.shell.sys.stderr', new_callable=six.StringIO)
def test_no_arguments_shows_usage(self, stderr):
self.assertRaises(SystemExit, shell.main, args=['cloud-init'])
self.assertIn('usage: cloud-init', stderr.getvalue())
@mock.patch('cloudinit.shell.sys.stderr', mock.MagicMock())
def test_no_arguments_exits_2(self):
exc = self.assertRaises(SystemExit, shell.main, args=['cloud-init'])
self.assertEqual(2, exc.code)
@mock.patch('cloudinit.shell.sys.stderr', new_callable=six.StringIO)
def test_no_arguments_shows_error_message(self, stderr):
self.assertRaises(SystemExit, shell.main, args=['cloud-init'])
self.assertIn('cloud-init: error: too few arguments',
stderr.getvalue())
class TestLoggingConfiguration(TestCase):
@mock.patch('cloudinit.shell.sys.stderr', new_callable=six.StringIO)
def test_log_to_console(self, stderr):
shell.main(args=['cloud-init', '--log-to-console', 'version'])
shell.logging.getLogger().info('test log message')
self.assertIn('test log message', stderr.getvalue())
@mock.patch('cloudinit.shell.sys.stderr', new_callable=six.StringIO)
def test_log_to_console_not_default(self, stderr):
shell.main(args=['cloud-init', 'version'])
shell.logging.getLogger().info('test log message')
self.assertNotIn('test log message', stderr.getvalue())

@@ -1,137 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import fixtures
import mock
import os
import textwrap
from cloudinit import templater
from cloudinit.tests import TestCase
class TestTemplates(TestCase):
jinja_tmpl = '\n'.join((
"## template:jinja",
"{{a}},{{b}}",
""
))
jinja_params = {'a': '1', 'b': '2'}
jinja_expected = '1,2\n'
def test_render_basic(self):
in_data = textwrap.dedent("""
${b}
c = d
""")
in_data = in_data.strip()
expected_data = textwrap.dedent("""
2
c = d
""")
out_data = templater.basic_render(in_data, {'b': 2})
self.assertEqual(expected_data.strip(), out_data)
def test_render_jinja(self):
c = templater.render_string(self.jinja_tmpl, self.jinja_params)
self.assertEqual(self.jinja_expected, c)
def test_render_jinja_crlf(self):
blob = '\r\n'.join((
"## template:jinja",
"{{a}},{{b}}"))
c = templater.render_string(blob, {"a": 1, "b": 2})
self.assertEqual("1,2", c)
def test_render_default(self):
blob = '''$a,$b'''
c = templater.render_string(blob, {"a": 1, "b": 2})
self.assertEqual("1,2", c)
    def test_render_explicit_default(self):
blob = '\n'.join(('## template: basic', '$a,$b',))
c = templater.render_string(blob, {"a": 1, "b": 2})
self.assertEqual("1,2", c)
def test_render_basic_deeper(self):
hn = 'myfoohost.yahoo.com'
expected_data = "h=%s\nc=d\n" % hn
in_data = "h=$hostname.canonical_name\nc=d\n"
params = {
"hostname": {
"canonical_name": hn,
},
}
out_data = templater.render_string(in_data, params)
self.assertEqual(expected_data, out_data)
def test_render_basic_no_parens(self):
hn = "myfoohost"
in_data = "h=$hostname\nc=d\n"
expected_data = "h=%s\nc=d\n" % hn
out_data = templater.basic_render(in_data, {'hostname': hn})
self.assertEqual(expected_data, out_data)
def test_render_basic_parens(self):
hn = "myfoohost"
in_data = "h = ${hostname}\nc=d\n"
expected_data = "h = %s\nc=d\n" % hn
out_data = templater.basic_render(in_data, {'hostname': hn})
self.assertEqual(expected_data, out_data)
def test_render_basic2(self):
mirror = "mymirror"
codename = "zany"
in_data = "deb $mirror $codename-updates main contrib non-free"
ex_data = "deb %s %s-updates main contrib non-free" % (mirror,
codename)
out_data = templater.basic_render(
in_data, {'mirror': mirror, 'codename': codename})
self.assertEqual(ex_data, out_data)
def test_render_basic_exception_1(self):
in_data = "h=${foo.bar}"
self.assertRaises(
TypeError, templater.basic_render, in_data, {'foo': [1, 2]})
def test_unknown_renderer_raises_exception(self):
blob = '\n'.join((
"## template:bigfastcat",
            "Hello $name"
""))
self.assertRaises(
ValueError, templater.render_string, blob, {'name': 'foo'})
@mock.patch.object(templater, 'JINJA_AVAILABLE', False)
def test_jinja_without_jinja_raises_exception(self):
blob = '\n'.join((
"## template:jinja",
            "Hello {{name}}"
""))
self.assertRaises(
ValueError, templater.render_string, blob, {'name': 'foo'})
def test_render_from_file(self):
td = self.useFixture(fixtures.TempDir()).path
fname = os.path.join(td, "myfile")
with open(fname, "w") as fp:
fp.write(self.jinja_tmpl)
rendered = templater.render_from_file(fname, self.jinja_params)
self.assertEqual(rendered, self.jinja_expected)
def test_render_to_file(self):
td = self.useFixture(fixtures.TempDir()).path
src = os.path.join(td, "src")
target = os.path.join(td, "target")
with open(src, "w") as fp:
fp.write(self.jinja_tmpl)
templater.render_to_file(src, target, self.jinja_params)
with open(target, "r") as fp:
rendered = fp.read()
self.assertEqual(rendered, self.jinja_expected)
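The `basic_render` behavior exercised by these tests can be sketched on its own. The following is a hypothetical re-implementation inferred purely from the assertions above (the real `cloudinit.templater` code is not part of this hunk): substitute `$name` and `${name}` placeholders, resolve dotted names through nested dicts, and raise `TypeError` when a dotted lookup hits a non-dict.

```python
import re

def basic_render(content, params):
    # Hypothetical sketch of the basic renderer, inferred from the tests
    # above; not the actual cloudinit.templater implementation.
    pattern = re.compile(r'\$(?:(?P<named>\w[\w.]*)|\{(?P<braced>\w[\w.]*)\})')

    def replacer(match):
        name = match.group('named') or match.group('braced')
        value = params
        for part in name.split('.'):
            if not isinstance(value, dict):
                # Mirrors test_render_basic_exception_1: dotted access
                # into a non-dict is an error.
                raise TypeError("'%s' is not resolvable" % name)
            value = value[part]
        return str(value)

    return pattern.sub(replacer, content)

print(basic_render("h=$hostname.canonical_name\nc=d\n",
                   {"hostname": {"canonical_name": "myfoohost.yahoo.com"}}))
```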

@@ -1,138 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import httpretty
from cloudinit.tests import TestCase
from cloudinit.tests.util import mock
from cloudinit import url_helper
class TimeJumpSideEffect(object):
def __init__(self, first_time, remaining_time):
def generator():
yield first_time
while True:
yield remaining_time
self.time = generator()
def __call__(self):
return next(self.time)
class UrlHelperWaitForUrlsTest(TestCase):
@httpretty.activate
def test_url_wait_for(self):
urls_actions = [
("http://www.yahoo.com", (False, False, True)),
("http://www.google.com", (False, False, False)),
]
urls = []
for (url, actions) in urls_actions:
urls.append(url)
for worked in actions:
if worked:
httpretty.register_uri(httpretty.GET,
url, body=b'it worked!')
else:
httpretty.register_uri(httpretty.GET,
url, body=b'no worky',
status=400)
url, response = url_helper.wait_any_url(urls)
self.assertEqual("http://www.yahoo.com", url)
self.assertIsInstance(response, url_helper.RequestsResponse)
self.assertEqual(response.contents, b'it worked!')
@httpretty.activate
@mock.patch.object(
url_helper, 'now', mock.Mock(side_effect=TimeJumpSideEffect(0, 100)))
def test_url_wait_for_no_work(self):
def request_callback(request, uri, headers):
return (400, headers, b"no worky")
urls = [
"http://www.yahoo.com",
"http://www.google.com",
]
for url in urls:
httpretty.register_uri(httpretty.GET,
url, body=request_callback)
self.assertIsNone(url_helper.wait_any_url(urls, max_wait=1))
class UrlHelperFetchTest(TestCase):
@httpretty.activate
def test_url_fetch(self):
httpretty.register_uri(httpretty.GET,
"http://www.yahoo.com",
body=b'it worked!')
resp = url_helper.read_url("http://www.yahoo.com")
self.assertEqual(b"it worked!", resp.contents)
self.assertEqual(url_helper.OK, resp.status_code)
@httpretty.activate
def test_no_protocol_url(self):
body = b'it worked!'
no_proto = 'www.yahoo.com'
httpretty.register_uri(httpretty.GET, "http://" + no_proto, body=body)
resp = url_helper.read_url(no_proto)
self.assertTrue(resp.url.startswith("http://"))
@httpretty.activate
def test_response_has_url(self):
body = b'it worked!'
url = 'http://www.yahoo.com/'
httpretty.register_uri(httpretty.GET, url, body=body)
resp = url_helper.read_url(url)
self.assertEqual(resp.url, url)
self.assertEqual(body, resp.contents)
@httpretty.activate
def test_retry_url_fetch(self):
httpretty.register_uri(httpretty.GET,
"http://www.yahoo.com",
responses=[
httpretty.Response(body=b"no worky",
status=400),
httpretty.Response(body=b"it worked!",
status=200),
])
resp = url_helper.read_url("http://www.yahoo.com", retries=2)
self.assertEqual(b"it worked!", resp.contents)
self.assertEqual(url_helper.OK, resp.status_code)
@httpretty.activate
def test_failed_url_fetch(self):
httpretty.register_uri(httpretty.GET,
"http://www.yahoo.com",
body=b'no worky', status=400)
self.assertRaises(url_helper.UrlError,
url_helper.read_url, "http://www.yahoo.com")
@httpretty.activate
def test_failed_retry_url_fetch(self):
httpretty.register_uri(httpretty.GET,
"http://www.yahoo.com",
responses=[
httpretty.Response(body=b"no worky",
status=400),
httpretty.Response(body=b"no worky",
status=400),
httpretty.Response(body=b"no worky",
status=400),
])
self.assertRaises(url_helper.UrlError,
url_helper.read_url, "http://www.yahoo.com",
retries=2)

@@ -1,73 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import logging
import sys
try:
from unittest import mock
except ImportError:
import mock # noqa
_IS_PY26 = sys.version_info[0:2] == (2, 6)
# This is similar to unittest.TestCase.assertLogs from Python 3.4.
class SnatchHandler(logging.Handler):
if _IS_PY26:
# Old style junk is required on 2.6...
def __init__(self, *args, **kwargs):
logging.Handler.__init__(self, *args, **kwargs)
self.output = []
else:
def __init__(self, *args, **kwargs):
super(SnatchHandler, self).__init__(*args, **kwargs)
self.output = []
def emit(self, record):
msg = self.format(record)
self.output.append(msg)
class LogSnatcher(object):
"""A context manager to capture emitted logged messages.
The class can be used as following::
with LogSnatcher('plugins.windows.createuser') as snatcher:
LOG.info("doing stuff")
LOG.info("doing stuff %s", 1)
LOG.warn("doing other stuff")
...
self.assertEqual(snatcher.output,
['INFO:unknown:doing stuff',
'INFO:unknown:doing stuff 1',
'WARN:unknown:doing other stuff'])
"""
@property
def output(self):
"""Get the output of this Snatcher.
The output is a list of log messages, already formatted.
"""
return self._snatch_handler.output
def __init__(self, logger_name):
self._logger_name = logger_name
self._snatch_handler = SnatchHandler()
self._logger = logging.getLogger(self._logger_name)
self._previous_level = self._logger.getEffectiveLevel()
def __enter__(self):
self._logger.setLevel(logging.DEBUG)
self._logger.handlers.append(self._snatch_handler)
return self
def __exit__(self, *args):
self._logger.handlers.remove(self._snatch_handler)
self._logger.setLevel(self._previous_level)
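A condensed, Python-3-only restatement of the two helpers above, with one assumption made explicit: a `Formatter` matching the docstring's `LEVEL:name:message` layout is attached here, whereas the original leaves formatting to the handler's default.

```python
import logging

class SnatchHandler(logging.Handler):
    """Collects formatted log records in a list (condensed from above)."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.output = []

    def emit(self, record):
        self.output.append(self.format(record))

class LogSnatcher:
    """Context manager capturing messages emitted on one named logger."""
    def __init__(self, logger_name):
        self._snatch_handler = SnatchHandler()
        # Assumption: an explicit formatter so output matches the
        # docstring example's LEVEL:name:message shape.
        self._snatch_handler.setFormatter(
            logging.Formatter('%(levelname)s:%(name)s:%(message)s'))
        self._logger = logging.getLogger(logger_name)
        self._previous_level = self._logger.getEffectiveLevel()

    @property
    def output(self):
        return self._snatch_handler.output

    def __enter__(self):
        self._logger.setLevel(logging.DEBUG)
        self._logger.addHandler(self._snatch_handler)
        return self

    def __exit__(self, *args):
        self._logger.removeHandler(self._snatch_handler)
        self._logger.setLevel(self._previous_level)

LOG = logging.getLogger('demo')
with LogSnatcher('demo') as snatcher:
    LOG.info('doing stuff')
    LOG.warning('doing other stuff')
print(snatcher.output)
```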

@@ -1,307 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import time
try:
from time import monotonic as now
except ImportError: # pragma: nocover
from time import time as now
import requests
from requests import adapters
from requests import exceptions
from requests import structures
# Arg, why does requests vendorize urllib3....
from requests.packages.urllib3 import util as urllib3_util
from six.moves.urllib.parse import quote as urlquote # noqa
from six.moves.urllib.parse import urlparse # noqa
from six.moves.urllib.parse import urlunparse # noqa
from six.moves.http_client import BAD_REQUEST as _BAD_REQUEST
from six.moves.http_client import CONFLICT # noqa
from six.moves.http_client import MULTIPLE_CHOICES as _MULTIPLE_CHOICES
from six.moves.http_client import OK
from cloudinit import logging
from cloudinit import version
SSL_ENABLED = True
try:
import ssl as _ssl # noqa
except ImportError:
SSL_ENABLED = False
LOG = logging.getLogger(__name__)
def _get_base_url(url):
parsed_url = list(urlparse(url, scheme='http'))
parsed_url[2] = parsed_url[3] = parsed_url[4] = parsed_url[5] = ''
return urlunparse(parsed_url)
def _clean_url(url):
parsed_url = list(urlparse(url, scheme='http'))
if not parsed_url[1] and parsed_url[2]:
# Swap these since this seems to be a common
# occurrence when given urls like 'www.google.com'
parsed_url[1] = parsed_url[2]
parsed_url[2] = ''
return urlunparse(parsed_url)
class _Retry(urllib3_util.Retry):
def is_forced_retry(self, method, status_code):
# Allow >= 400 to be tried...
return status_code >= _BAD_REQUEST
def sleep(self):
# The base class doesn't have a way to log what we are doing,
# so replace it with one that does...
backoff = self.get_backoff_time()
if backoff <= 0:
return
else:
            LOG.debug("Sleeping %s seconds before trying again",
                      backoff)
time.sleep(backoff)
class RequestsResponse(object):
"""A wrapper for requests responses (that provides common functions).
This exists so that things like StringResponse or FileResponse can
also exist, but with different sources of their response (aka not
just from the requests library).
"""
def __init__(self, response):
self._response = response
@property
def contents(self):
return self._response.content
@property
def url(self):
return self._response.url
def ok(self, redirects_ok=False):
upper = _MULTIPLE_CHOICES
if redirects_ok:
upper = _BAD_REQUEST
return self.status_code >= OK and self.status_code < upper
@property
def headers(self):
return self._response.headers
@property
def status_code(self):
return self._response.status_code
def __str__(self):
return self._response.text
class UrlError(IOError):
def __init__(self, cause, code=None, headers=None):
super(UrlError, self).__init__(str(cause))
self.cause = cause
self.status_code = code
self.headers = headers or {}
def _get_ssl_args(url, ssl_details):
ssl_args = {}
scheme = urlparse(url).scheme
if scheme == 'https' and ssl_details:
if not SSL_ENABLED:
            LOG.warn("SSL is not supported, "
                     "certificate verification cannot occur!")
else:
if 'ca_certs' in ssl_details and ssl_details['ca_certs']:
ssl_args['verify'] = ssl_details['ca_certs']
else:
ssl_args['verify'] = True
if 'cert_file' in ssl_details and 'key_file' in ssl_details:
ssl_args['cert'] = [ssl_details['cert_file'],
ssl_details['key_file']]
elif 'cert_file' in ssl_details:
ssl_args['cert'] = str(ssl_details['cert_file'])
return ssl_args
def read_url(url, data=None, timeout=None, retries=0,
headers=None, ssl_details=None,
check_status=True, allow_redirects=True):
"""Fetch a url (or post to one) with the given options.
:param url: url to fetch
:param data:
any data to POST (this switches the request method to POST
instead of GET)
:param timeout: the timeout (in seconds) to wait for a response
:param headers: any headers to provide (and send along) in the request
:param ssl_details:
a dictionary containing any ssl settings, cert_file, ca_certs
and verify are valid entries (and they are only used when the
url provided is https)
:param check_status:
checks that the response status is OK after fetching (this
        ensures an exception is raised on non-OK status codes)
:param allow_redirects: enables redirects (or disables them)
:param retries:
maximum number of retries to attempt when fetching the url and
the fetch fails
"""
url = _clean_url(url)
request_args = {
'url': url,
}
request_args.update(_get_ssl_args(url, ssl_details))
request_args['allow_redirects'] = allow_redirects
request_args['method'] = 'GET'
if timeout is not None:
request_args['timeout'] = max(float(timeout), 0)
if data:
request_args['method'] = 'POST'
request_args['data'] = data
if not headers:
headers = structures.CaseInsensitiveDict()
else:
headers = structures.CaseInsensitiveDict(headers)
if 'User-Agent' not in headers:
headers['User-Agent'] = 'Cloud-Init/%s' % (version.version_string())
request_args['headers'] = headers
session = requests.Session()
if retries:
retry = _Retry(total=max(int(retries), 0),
raise_on_redirect=not allow_redirects)
session.mount(_get_base_url(url),
adapters.HTTPAdapter(max_retries=retry))
try:
with session:
response = session.request(**request_args)
if check_status:
response.raise_for_status()
except exceptions.RequestException as e:
if e.response is not None:
raise UrlError(e, code=e.response.status_code,
headers=e.response.headers)
else:
raise UrlError(e)
else:
LOG.debug("Read from %s (%s, %sb)", url, response.status_code,
len(response.content))
return RequestsResponse(response)
def wait_any_url(urls, max_wait=None, timeout=None,
status_cb=None, sleep_time=1,
exception_cb=None):
"""Wait for one of many urls to respond correctly.
:param urls: a list of urls to try
:param max_wait: roughly the maximum time to wait before giving up
:param timeout: the timeout provided to ``read_url``
:param status_cb:
call method with string message when a url is not available
:param exception_cb:
call method with 2 arguments 'msg' (per status_cb) and
'exception', the exception that occurred.
:param sleep_time: how long to sleep before trying each url again
    The idea of this routine is to wait for the EC2 metadata service to
    come up. On both Eucalyptus and EC2 we have seen the case where
    the instance hit the MD before the MD service was up. EC2 seems
    to have permanently fixed this, though.
In openstack, the metadata service might be painfully slow, and
unable to avoid hitting a timeout of even up to 10 seconds or more
(LP: #894279) for a simple GET.
Offset those needs with the need to not hang forever (and block boot)
on a system where cloud-init is configured to look for EC2 Metadata
service but is not going to find one. It is possible that the instance
    data host (169.254.169.254) may be firewalled off entirely for a system,
meaning that the connection will block forever unless a timeout is set.
This will return a tuple of the first url which succeeded and the
response object.
"""
start_time = now()
def log_status_cb(msg, exc=None):
LOG.debug(msg)
if not status_cb:
status_cb = log_status_cb
def timeup(max_wait, start_time):
current_time = now()
        return ((max_wait is None or max_wait <= 0) or
(current_time - start_time > max_wait))
loop_n = 0
while True:
# This makes a backoff with the following graph:
#
# https://www.desmos.com/calculator/c8pwjy6wmt
sleep_time = int(loop_n / 5) + 1
for url in urls:
current_time = now()
if loop_n != 0:
if timeup(max_wait, start_time):
break
if (timeout and
(current_time + timeout > (start_time + max_wait))):
                    # shorten timeout to not run way over max_wait
timeout = int((start_time + max_wait) - current_time)
reason = ""
url_exc = None
try:
response = read_url(url, timeout=timeout, check_status=False)
                if not response.contents:
                    reason = "empty response [%s]" % response.status_code
                    url_exc = UrlError(ValueError(reason),
                                       code=response.status_code,
                                       headers=response.headers)
                elif not response.ok():
                    reason = "bad status code [%s]" % response.status_code
                    url_exc = UrlError(ValueError(reason),
                                       code=response.status_code,
                                       headers=response.headers)
else:
return url, response
except UrlError as e:
reason = "request error [%s]" % e
url_exc = e
except Exception as e:
reason = "unexpected error [%s]" % e
url_exc = e
current_time = now()
time_taken = int(current_time - start_time)
status_msg = "Calling '%s' failed [%s/%ss]: %s" % (url,
time_taken,
max_wait,
reason)
status_cb(status_msg)
if exception_cb:
exception_cb(msg=status_msg, exception=url_exc)
if timeup(max_wait, start_time):
break
loop_n = loop_n + 1
        LOG.debug("Sleeping %s seconds before trying again",
                  sleep_time)
time.sleep(sleep_time)
return None
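The retry cadence in the loop above comes from `sleep_time = int(loop_n / 5) + 1`; a tiny sketch of the schedule it yields (the Desmos link in the comment plots the same step function):

```python
def backoff_schedule(iterations):
    # Reproduces the sleep_time computation from the retry loop above:
    # one second for the first five iterations, two for the next five,
    # and so on, stepping up every fifth loop.
    return [int(n / 5) + 1 for n in range(iterations)]

print(backoff_schedule(12))
```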

@@ -1,24 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
from cloudinit import logging
LOG = logging.getLogger(__name__)
def load_file(path, encoding='utf8'):
LOG.blather("Loading file from path '%s' (%s)", path, encoding)
with open(path, 'rb') as fh:
return fh.read().decode(encoding)
class abstractclassmethod(classmethod):
"""A backport for abc.abstractclassmethod from Python 3."""
__isabstractmethod__ = True
def __init__(self, func):
func.__isabstractmethod__ = True
super(abstractclassmethod, self).__init__(func)
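The backport above cooperates with `abc.ABCMeta` because the metaclass only inspects `__isabstractmethod__` on class attributes. A small usage sketch (the `Source`/`FileSource` names are illustrative, not from cloud-init):

```python
import abc

class abstractclassmethod(classmethod):
    """The backport from above, restated so the example is self-contained."""
    __isabstractmethod__ = True

    def __init__(self, func):
        func.__isabstractmethod__ = True
        super(abstractclassmethod, self).__init__(func)

class Source(abc.ABC):
    @abstractclassmethod
    def from_path(cls, path):
        """Build an instance from a file path."""

class FileSource(Source):
    @classmethod
    def from_path(cls, path):
        return cls()

# Instantiating Source() directly raises TypeError (abstract method),
# while the concrete subclass works as expected.
print(isinstance(FileSource.from_path('/etc/cloud/cloud.cfg'), FileSource))
```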

@@ -1,14 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import pkg_resources
try:
from pbr import version as pbr_version
_version_info = pbr_version.VersionInfo('cloudinit')
version_string = _version_info.version_string
except ImportError: # pragma: nocover
_version_info = pkg_resources.get_distribution('cloudinit')
version_string = lambda: _version_info.version

@@ -1,7 +0,0 @@
cloud-init Python API Documentation
===================================
.. toctree::
:maxdepth: 2
api/autoindex

@@ -1,4 +0,0 @@
Don't put files in here; it's intended only for auto-generated output!
Specifically, files which aren't known to git will be cleaned out of
this directory before the docs are generated by tox.

@@ -1,5 +0,0 @@
# Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
extensions = ['sphinx.ext.autodoc']

@@ -1,22 +0,0 @@
.. _index:
=====================
Documentation
=====================
.. rubric:: Everything about cloud-init.
Summary
-----------------
`Cloud-init`_ is the *de facto* multi-distribution package that handles
early initialization of a cloud instance.
.. toctree::
:maxdepth: 2
api
.. _Cloud-init: https://launchpad.net/cloud-init

@@ -1,17 +0,0 @@
[Unit]
Description=Apply the settings specified in cloud-config
After=network.target syslog.target cloud-config.target
Requires=cloud-config.target
Wants=network.target
[Service]
Type=oneshot
ExecStart=/usr/bin/cloud-init modules --mode=config
RemainAfterExit=yes
TimeoutSec=0
# Output needs to appear in instance console output
StandardOutput=journal+console
[Install]
WantedBy=multi-user.target

@@ -1,10 +0,0 @@
# cloud-init normally emits a "cloud-config" upstart event to inform third
# parties that cloud-config is available, which does us no good when we're
# using systemd. cloud-config.target serves as this synchronization point
# instead. Services that would "start on cloud-config" with upstart can
# instead use "After=cloud-config.target" and "Wants=cloud-config.target"
# as appropriate.
[Unit]
Description=Cloud-config availability
Requires=cloud-init-local.service cloud-init.service

@@ -1,17 +0,0 @@
[Unit]
Description=Execute cloud user/final scripts
After=network.target syslog.target cloud-config.service rc-local.service
Requires=cloud-config.target
Wants=network.target
[Service]
Type=oneshot
ExecStart=/usr/bin/cloud-init modules --mode=final
RemainAfterExit=yes
TimeoutSec=0
# Output needs to appear in instance console output
StandardOutput=journal+console
[Install]
WantedBy=multi-user.target

@@ -1,16 +0,0 @@
[Unit]
Description=Initial cloud-init job (pre-networking)
Wants=local-fs.target
After=local-fs.target
[Service]
Type=oneshot
ExecStart=/usr/bin/cloud-init init --local
RemainAfterExit=yes
TimeoutSec=0
# Output needs to appear in instance console output
StandardOutput=journal+console
[Install]
WantedBy=multi-user.target

@@ -1,18 +0,0 @@
[Unit]
Description=Initial cloud-init job (metadata service crawler)
After=local-fs.target network.target cloud-init-local.service
Before=sshd.service sshd-keygen.service
Requires=network.target
Wants=local-fs.target cloud-init-local.service sshd.service sshd-keygen.service
[Service]
Type=oneshot
ExecStart=/usr/bin/cloud-init init
RemainAfterExit=yes
TimeoutSec=0
# Output needs to appear in instance console output
StandardOutput=journal+console
[Install]
WantedBy=multi-user.target

@@ -1,64 +0,0 @@
#! /bin/sh
### BEGIN INIT INFO
# Provides: cloud-config
# Required-Start: cloud-init cloud-init-local
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Cloud init modules --mode config
# Description: Cloud configuration initialization
### END INIT INFO
# Authors: Julien Danjou <acid@debian.org>
# Juerg Haefliger <juerg.haefliger@hp.com>
# Thomas Goirand <zigo@debian.org>
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Cloud service"
NAME=cloud-init
DAEMON=/usr/bin/$NAME
DAEMON_ARGS="modules --mode config"
SCRIPTNAME=/etc/init.d/$NAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions
if init_is_upstart; then
case "$1" in
stop)
exit 0
;;
*)
exit 1
;;
esac
fi
case "$1" in
start)
log_daemon_msg "Starting $DESC" "$NAME"
$DAEMON ${DAEMON_ARGS}
case "$?" in
0|1) log_end_msg 0 ;;
2) log_end_msg 1 ;;
esac
;;
stop|restart|force-reload)
echo "Error: argument '$1' not supported" >&2
exit 3
;;
*)
echo "Usage: $SCRIPTNAME {start}" >&2
exit 3
;;
esac
:

@@ -1,66 +0,0 @@
#! /bin/sh
### BEGIN INIT INFO
# Provides: cloud-final
# Required-Start: $all cloud-config
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Cloud init modules final jobs
# Description: This runs the cloud configuration initialization "final" jobs
# and can be seen as the traditional "rc.local" time for the cloud.
# It runs after all cloud-config jobs are run
### END INIT INFO
# Authors: Julien Danjou <acid@debian.org>
# Juerg Haefliger <juerg.haefliger@hp.com>
# Thomas Goirand <zigo@debian.org>
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Cloud service"
NAME=cloud-init
DAEMON=/usr/bin/$NAME
DAEMON_ARGS="modules --mode final"
SCRIPTNAME=/etc/init.d/$NAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions
if init_is_upstart; then
case "$1" in
stop)
exit 0
;;
*)
exit 1
;;
esac
fi
case "$1" in
start)
log_daemon_msg "Starting $DESC" "$NAME"
$DAEMON ${DAEMON_ARGS}
case "$?" in
0|1) log_end_msg 0 ;;
2) log_end_msg 1 ;;
esac
;;
stop|restart|force-reload)
echo "Error: argument '$1' not supported" >&2
exit 3
;;
*)
echo "Usage: $SCRIPTNAME {start}" >&2
exit 3
;;
esac
:

@@ -1,64 +0,0 @@
#! /bin/sh
### BEGIN INIT INFO
# Provides: cloud-init
# Required-Start: $local_fs $remote_fs $syslog $network cloud-init-local
# Required-Stop: $remote_fs
# X-Start-Before: sshd
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Cloud init
# Description: Cloud configuration initialization
### END INIT INFO
# Authors: Julien Danjou <acid@debian.org>
# Thomas Goirand <zigo@debian.org>
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Cloud service"
NAME=cloud-init
DAEMON=/usr/bin/$NAME
DAEMON_ARGS="init"
SCRIPTNAME=/etc/init.d/$NAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions
if init_is_upstart; then
case "$1" in
stop)
exit 0
;;
*)
exit 1
;;
esac
fi
case "$1" in
start)
log_daemon_msg "Starting $DESC" "$NAME"
$DAEMON ${DAEMON_ARGS}
case "$?" in
0|1) log_end_msg 0 ;;
2) log_end_msg 1 ;;
esac
;;
stop|restart|force-reload)
echo "Error: argument '$1' not supported" >&2
exit 3
;;
*)
echo "Usage: $SCRIPTNAME {start}" >&2
exit 3
;;
esac
:

@@ -1,63 +0,0 @@
#! /bin/sh
### BEGIN INIT INFO
# Provides: cloud-init-local
# Required-Start: $local_fs $remote_fs
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Cloud init local
# Description: Cloud configuration initialization
### END INIT INFO
# Authors: Julien Danjou <acid@debian.org>
# Juerg Haefliger <juerg.haefliger@hp.com>
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Cloud service"
NAME=cloud-init
DAEMON=/usr/bin/$NAME
DAEMON_ARGS="init --local"
SCRIPTNAME=/etc/init.d/$NAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions
if init_is_upstart; then
case "$1" in
stop)
exit 0
;;
*)
exit 1
;;
esac
fi
case "$1" in
start)
log_daemon_msg "Starting $DESC" "$NAME"
$DAEMON ${DAEMON_ARGS}
case "$?" in
0|1) log_end_msg 0 ;;
2) log_end_msg 1 ;;
esac
;;
stop|restart|force-reload)
echo "Error: argument '$1' not supported" >&2
exit 3
;;
*)
echo "Usage: $SCRIPTNAME {start}" >&2
exit 3
;;
esac
:


@ -1,35 +0,0 @@
#!/bin/sh
# PROVIDE: cloudconfig
# REQUIRE: cloudinit cloudinitlocal
# BEFORE: cloudfinal
. /etc/rc.subr
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
export CLOUD_CFG=/usr/local/etc/cloud/cloud.cfg
name="cloudconfig"
command="/usr/local/bin/cloud-init"
start_cmd="cloudconfig_start"
stop_cmd=":"
rcvar="cloudinit_enable"
start_precmd="cloudinit_override"
start_cmd="cloudconfig_start"
cloudinit_override()
{
# If a sysconfig/defaults variable override file exists, use it...
if [ -f /etc/defaults/cloud-init ]; then
. /etc/defaults/cloud-init
fi
}
cloudconfig_start()
{
echo "${command} starting"
${command} modules --mode config
}
load_rc_config $name
run_rc_command "$1"
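An rc.d script like the one above only runs when its `rcvar` is enabled; a minimal sketch of the `/etc/rc.conf` entry that would enable it, assuming the standard `rc.subr` gating behavior (the value shown is illustrative):

```shell
# Hypothetical /etc/rc.conf fragment (sketch): rc.subr checks the
# rcvar declared by the script ("cloudinit_enable") before it will
# run start_cmd.
cloudinit_enable="YES"
```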


@ -1,35 +0,0 @@
#!/bin/sh
# PROVIDE: cloudfinal
# REQUIRE: LOGIN cloudinit cloudconfig cloudinitlocal
# REQUIRE: cron mail sshd swaplate
. /etc/rc.subr
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
export CLOUD_CFG=/usr/local/etc/cloud/cloud.cfg
name="cloudfinal"
command="/usr/local/bin/cloud-init"
start_cmd="cloudfinal_start"
stop_cmd=":"
rcvar="cloudinit_enable"
start_precmd="cloudinit_override"
start_cmd="cloudfinal_start"
cloudinit_override()
{
# If a sysconfig/defaults variable override file exists, use it...
if [ -f /etc/defaults/cloud-init ]; then
. /etc/defaults/cloud-init
fi
}
cloudfinal_start()
{
echo -n "${command} starting"
${command} modules --mode final
}
load_rc_config $name
run_rc_command "$1"


@ -1,35 +0,0 @@
#!/bin/sh
# PROVIDE: cloudinit
# REQUIRE: FILESYSTEMS NETWORKING cloudinitlocal
# BEFORE: cloudconfig cloudfinal
. /etc/rc.subr
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
export CLOUD_CFG=/usr/local/etc/cloud/cloud.cfg
name="cloudinit"
command="/usr/local/bin/cloud-init"
start_cmd="cloudinit_start"
stop_cmd=":"
rcvar="cloudinit_enable"
start_precmd="cloudinit_override"
start_cmd="cloudinit_start"
cloudinit_override()
{
# If a sysconfig/defaults variable override file exists, use it...
if [ -f /etc/defaults/cloud-init ]; then
. /etc/defaults/cloud-init
fi
}
cloudinit_start()
{
echo -n "${command} starting"
${command} init
}
load_rc_config $name
run_rc_command "$1"


@ -1,35 +0,0 @@
#!/bin/sh
# PROVIDE: cloudinitlocal
# REQUIRE: mountcritlocal
# BEFORE: NETWORKING FILESYSTEMS cloudinit cloudconfig cloudfinal
. /etc/rc.subr
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
export CLOUD_CFG=/usr/local/etc/cloud/cloud.cfg
name="cloudinitlocal"
command="/usr/local/bin/cloud-init"
start_cmd="cloudlocal_start"
stop_cmd=":"
rcvar="cloudinit_enable"
start_precmd="cloudinit_override"
start_cmd="cloudlocal_start"
cloudinit_override()
{
# If a sysconfig/defaults variable override file exists, use it...
if [ -f /etc/defaults/cloud-init ]; then
. /etc/defaults/cloud-init
fi
}
cloudlocal_start()
{
echo -n "${command} starting"
${command} init --local
}
load_rc_config $name
run_rc_command "$1"


@ -1,13 +0,0 @@
#!/sbin/runscript
depend() {
after cloud-init-local
after cloud-init
before cloud-final
provide cloud-config
}
start() {
cloud-init modules --mode config
eend 0
}


@ -1,11 +0,0 @@
#!/sbin/runscript
depend() {
after cloud-config
provide cloud-final
}
start() {
cloud-init modules --mode final
eend 0
}


@ -1,12 +0,0 @@
#!/sbin/runscript
# add depends for network, dns, fs etc
depend() {
after cloud-init-local
before cloud-config
provide cloud-init
}
start() {
cloud-init init
eend 0
}


@ -1,13 +0,0 @@
#!/sbin/runscript
depend() {
after localmount
after netmount
before cloud-init
provide cloud-init-local
}
start() {
cloud-init init --local
eend 0
}


@ -1,105 +0,0 @@
#!/bin/sh
# Copyright 2012 Yahoo! Inc.
# This file is part of cloud-init. See LICENSE file for license information.
#
# See: http://wiki.debian.org/LSBInitScripts
# See: http://tiny.cc/czvbgw
# See: http://www.novell.com/coolsolutions/feature/15380.html
# Also based on dhcpd in RHEL (for comparison)
### BEGIN INIT INFO
# Provides: cloud-config
# Required-Start: cloud-init cloud-init-local
# Should-Start: $time
# Required-Stop:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: The config cloud-init job
# Description: Start cloud-init and runs the config phase
# and any associated config modules as desired.
### END INIT INFO
# Return values acc. to LSB for all commands but status:
# 0 - success
# 1 - generic or unspecified error
# 2 - invalid or excess argument(s)
# 3 - unimplemented feature (e.g. "reload")
# 4 - user had insufficient privileges
# 5 - program is not installed
# 6 - program is not configured
# 7 - program is not running
# 8--199 - reserved (8--99 LSB, 100--149 distrib, 150--199 appl)
#
# Note that starting an already running service, stopping
# or restarting a not-running service as well as the restart
# with force-reload (in case signaling is not supported) are
# considered a success.
RETVAL=0
prog="cloud-init"
cloud_init="/usr/bin/cloud-init"
conf="/etc/cloud/cloud.cfg"
# If sysconfig/default variable override files exist, use them...
[ -f /etc/sysconfig/cloud-init ] && . /etc/sysconfig/cloud-init
[ -f /etc/default/cloud-init ] && . /etc/default/cloud-init
start() {
[ -x $cloud_init ] || return 5
[ -f $conf ] || return 6
echo -n $"Starting $prog: "
$cloud_init $CLOUDINITARGS modules --mode config
RETVAL=$?
return $RETVAL
}
stop() {
echo -n $"Shutting down $prog: "
# No-op
RETVAL=7
return $RETVAL
}
case "$1" in
start)
start
RETVAL=$?
;;
stop)
stop
RETVAL=$?
;;
restart|try-restart|condrestart)
## Stop the service and regardless of whether it was
## running or not, start it again.
#
## Note: try-restart is now part of LSB (as of 1.9).
## RH has a similar command named condrestart.
start
RETVAL=$?
;;
reload|force-reload)
# It does not support reload
RETVAL=3
;;
status)
echo -n $"Checking for service $prog:"
# Return value is slightly different for the status command:
# 0 - service up and running
# 1 - service dead, but /var/run/ pid file exists
# 2 - service dead, but /var/lock/ lock file exists
# 3 - service not running (unused)
# 4 - service status unknown :-(
# 5--199 reserved (5--99 LSB, 100--149 distro, 150--199 appl.)
RETVAL=3
;;
*)
echo "Usage: $0 {start|stop|status|try-restart|condrestart|restart|force-reload|reload}"
RETVAL=3
;;
esac
exit $RETVAL
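The `status` exit codes documented in the script above follow the LSB convention. A small illustrative helper (the function name is mine, not part of cloud-init) that maps such a code to its meaning:

```shell
# Illustrative only: translate an LSB 'status' exit code into a
# human-readable description, per the table in the script above.
lsb_status_desc() {
    case "$1" in
        0) echo "service up and running" ;;
        1) echo "service dead, pid file exists" ;;
        2) echo "service dead, lock file exists" ;;
        3) echo "service not running" ;;
        4) echo "service status unknown" ;;
        *) echo "reserved" ;;
    esac
}
lsb_status_desc 3
```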


@ -1,105 +0,0 @@
#!/bin/sh
# Copyright 2012 Yahoo! Inc.
# This file is part of cloud-init. See LICENSE file for license information.
# See: http://wiki.debian.org/LSBInitScripts
# See: http://tiny.cc/czvbgw
# See: http://www.novell.com/coolsolutions/feature/15380.html
# Also based on dhcpd in RHEL (for comparison)
### BEGIN INIT INFO
# Provides: cloud-final
# Required-Start: $all cloud-config
# Should-Start: $time
# Required-Stop:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: The final cloud-init job
# Description: Start cloud-init and runs the final phase
# and any associated final modules as desired.
### END INIT INFO
# Return values acc. to LSB for all commands but status:
# 0 - success
# 1 - generic or unspecified error
# 2 - invalid or excess argument(s)
# 3 - unimplemented feature (e.g. "reload")
# 4 - user had insufficient privileges
# 5 - program is not installed
# 6 - program is not configured
# 7 - program is not running
# 8--199 - reserved (8--99 LSB, 100--149 distrib, 150--199 appl)
#
# Note that starting an already running service, stopping
# or restarting a not-running service as well as the restart
# with force-reload (in case signaling is not supported) are
# considered a success.
RETVAL=0
prog="cloud-init"
cloud_init="/usr/bin/cloud-init"
conf="/etc/cloud/cloud.cfg"
# If sysconfig/default variable override files exist, use them...
[ -f /etc/sysconfig/cloud-init ] && . /etc/sysconfig/cloud-init
[ -f /etc/default/cloud-init ] && . /etc/default/cloud-init
start() {
[ -x $cloud_init ] || return 5
[ -f $conf ] || return 6
echo -n $"Starting $prog: "
$cloud_init $CLOUDINITARGS modules --mode final
RETVAL=$?
return $RETVAL
}
stop() {
echo -n $"Shutting down $prog: "
# No-op
RETVAL=7
return $RETVAL
}
case "$1" in
start)
start
RETVAL=$?
;;
stop)
stop
RETVAL=$?
;;
restart|try-restart|condrestart)
## Stop the service and regardless of whether it was
## running or not, start it again.
#
## Note: try-restart is now part of LSB (as of 1.9).
## RH has a similar command named condrestart.
start
RETVAL=$?
;;
reload|force-reload)
# It does not support reload
RETVAL=3
;;
status)
echo -n $"Checking for service $prog:"
# Return value is slightly different for the status command:
# 0 - service up and running
# 1 - service dead, but /var/run/ pid file exists
# 2 - service dead, but /var/lock/ lock file exists
# 3 - service not running (unused)
# 4 - service status unknown :-(
# 5--199 reserved (5--99 LSB, 100--149 distro, 150--199 appl.)
RETVAL=3
;;
*)
echo "Usage: $0 {start|stop|status|try-restart|condrestart|restart|force-reload|reload}"
RETVAL=3
;;
esac
exit $RETVAL


@ -1,105 +0,0 @@
#!/bin/sh
# Copyright 2012 Yahoo! Inc.
# This file is part of cloud-init. See LICENSE file for license information.
# See: http://wiki.debian.org/LSBInitScripts
# See: http://tiny.cc/czvbgw
# See: http://www.novell.com/coolsolutions/feature/15380.html
# Also based on dhcpd in RHEL (for comparison)
### BEGIN INIT INFO
# Provides: cloud-init
# Required-Start: $local_fs $network $named $remote_fs cloud-init-local
# Should-Start: $time
# Required-Stop:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: The initial cloud-init job (net and fs contingent)
# Description: Start cloud-init and runs the initialization phase
# and any associated initial modules as desired.
### END INIT INFO
# Return values acc. to LSB for all commands but status:
# 0 - success
# 1 - generic or unspecified error
# 2 - invalid or excess argument(s)
# 3 - unimplemented feature (e.g. "reload")
# 4 - user had insufficient privileges
# 5 - program is not installed
# 6 - program is not configured
# 7 - program is not running
# 8--199 - reserved (8--99 LSB, 100--149 distrib, 150--199 appl)
#
# Note that starting an already running service, stopping
# or restarting a not-running service as well as the restart
# with force-reload (in case signaling is not supported) are
# considered a success.
RETVAL=0
prog="cloud-init"
cloud_init="/usr/bin/cloud-init"
conf="/etc/cloud/cloud.cfg"
# If sysconfig/default variable override files exist, use them...
[ -f /etc/sysconfig/cloud-init ] && . /etc/sysconfig/cloud-init
[ -f /etc/default/cloud-init ] && . /etc/default/cloud-init
start() {
[ -x $cloud_init ] || return 5
[ -f $conf ] || return 6
echo -n $"Starting $prog: "
$cloud_init $CLOUDINITARGS init
RETVAL=$?
return $RETVAL
}
stop() {
echo -n $"Shutting down $prog: "
# No-op
RETVAL=7
return $RETVAL
}
case "$1" in
start)
start
RETVAL=$?
;;
stop)
stop
RETVAL=$?
;;
restart|try-restart|condrestart)
## Stop the service and regardless of whether it was
## running or not, start it again.
#
## Note: try-restart is now part of LSB (as of 1.9).
## RH has a similar command named condrestart.
start
RETVAL=$?
;;
reload|force-reload)
# It does not support reload
RETVAL=3
;;
status)
echo -n $"Checking for service $prog:"
# Return value is slightly different for the status command:
# 0 - service up and running
# 1 - service dead, but /var/run/ pid file exists
# 2 - service dead, but /var/lock/ lock file exists
# 3 - service not running (unused)
# 4 - service status unknown :-(
# 5--199 reserved (5--99 LSB, 100--149 distro, 150--199 appl.)
RETVAL=3
;;
*)
echo "Usage: $0 {start|stop|status|try-restart|condrestart|restart|force-reload|reload}"
RETVAL=3
;;
esac
exit $RETVAL


@ -1,105 +0,0 @@
#!/bin/sh
# Copyright 2012 Yahoo! Inc.
# This file is part of cloud-init. See LICENSE file for license information.
# See: http://wiki.debian.org/LSBInitScripts
# See: http://tiny.cc/czvbgw
# See: http://www.novell.com/coolsolutions/feature/15380.html
# Also based on dhcpd in RHEL (for comparison)
### BEGIN INIT INFO
# Provides: cloud-init-local
# Required-Start: $local_fs $remote_fs
# Should-Start: $time
# Required-Stop:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: The initial cloud-init job (local fs contingent)
# Description: Start cloud-init and runs the initialization phases
# and any associated initial modules as desired.
### END INIT INFO
# Return values acc. to LSB for all commands but status:
# 0 - success
# 1 - generic or unspecified error
# 2 - invalid or excess argument(s)
# 3 - unimplemented feature (e.g. "reload")
# 4 - user had insufficient privileges
# 5 - program is not installed
# 6 - program is not configured
# 7 - program is not running
# 8--199 - reserved (8--99 LSB, 100--149 distrib, 150--199 appl)
#
# Note that starting an already running service, stopping
# or restarting a not-running service as well as the restart
# with force-reload (in case signaling is not supported) are
# considered a success.
RETVAL=0
prog="cloud-init"
cloud_init="/usr/bin/cloud-init"
conf="/etc/cloud/cloud.cfg"
# If sysconfig/default variable override files exist, use them...
[ -f /etc/sysconfig/cloud-init ] && . /etc/sysconfig/cloud-init
[ -f /etc/default/cloud-init ] && . /etc/default/cloud-init
start() {
[ -x $cloud_init ] || return 5
[ -f $conf ] || return 6
echo -n $"Starting $prog: "
$cloud_init $CLOUDINITARGS init --local
RETVAL=$?
return $RETVAL
}
stop() {
echo -n $"Shutting down $prog: "
# No-op
RETVAL=7
return $RETVAL
}
case "$1" in
start)
start
RETVAL=$?
;;
stop)
stop
RETVAL=$?
;;
restart|try-restart|condrestart)
## Stop the service and regardless of whether it was
## running or not, start it again.
#
## Note: try-restart is now part of LSB (as of 1.9).
## RH has a similar command named condrestart.
start
RETVAL=$?
;;
reload|force-reload)
# It does not support reload
RETVAL=3
;;
status)
echo -n $"Checking for service $prog:"
# Return value is slightly different for the status command:
# 0 - service up and running
# 1 - service dead, but /var/run/ pid file exists
# 2 - service dead, but /var/lock/ lock file exists
# 3 - service not running (unused)
# 4 - service status unknown :-(
# 5--199 reserved (5--99 LSB, 100--149 distro, 150--199 appl.)
RETVAL=3
;;
*)
echo "Usage: $0 {start|stop|status|try-restart|condrestart|restart|force-reload|reload}"
RETVAL=3
;;
esac
exit $RETVAL


@ -1,9 +0,0 @@
# cloud-config - Handle applying the settings specified in cloud-config
description "Handle applying cloud-config"
emits cloud-config
start on (filesystem and started rsyslog)
console output
task
exec cloud-init modules --mode=config


@ -1,10 +0,0 @@
# cloud-final.conf - run "final" jobs
# this runs around traditional "rc.local" time.
# and after all cloud-config jobs are run
description "execute cloud user/final scripts"
start on (stopped rc RUNLEVEL=[2345] and stopped cloud-config)
console output
task
exec cloud-init modules --mode=final


@ -1,83 +0,0 @@
# cloud-init-blocknet
# the purpose of this job is
# * to block networking from coming up until cloud-init-nonet has run
# * timeout if they all do not come up in a reasonable amount of time
description "block networking until cloud-init-local"
start on (starting network-interface
or starting network-manager
or starting networking)
stop on stopped cloud-init-local
instance $JOB${INTERFACE:+/}${INTERFACE:-}
export INTERFACE
task
script
set +e # you cannot trap TERM reliably with 'set -e'
SLEEP_CHILD=""
static_network_up() {
local emitted="/run/network/static-network-up-emitted"
# /run/network/static-network-up-emitted is written by
# upstart (via /etc/network/if-up.d/upstart). Its presence would
# indicate that static-network-up has already fired.
[ -e "$emitted" -o -e "/var/$emitted" ]
}
msg() {
local uptime="" idle="" msg=""
if [ -r /proc/uptime ]; then
read uptime idle < /proc/uptime
fi
msg="${UPSTART_INSTANCE}${uptime:+[${uptime}]}: $*"
echo "$msg"
}
handle_sigterm() {
# if we received sigterm and static networking is up then it probably
# came from upstart as a result of 'stop on static-network-up'
msg "got sigterm"
if [ -n "$SLEEP_CHILD" ]; then
if ! kill $SLEEP_CHILD 2>/dev/null; then
[ ! -d "/proc/$SLEEP_CHILD" ] ||
msg "hm.. failed to kill sleep pid $SLEEP_CHILD"
fi
fi
msg "stopped"
exit 0
}
dowait() {
msg "blocking $1 seconds"
# all this 'exec -a' does is get me a nicely named process in 'ps'
# ie, 'sleep-block-network-interface.eth1'
if [ -x /bin/bash ]; then
bash -c 'exec -a sleep-block-$1 sleep $2' -- "$UPSTART_INSTANCE" "$1" &
else
sleep "$1" &
fi
SLEEP_CHILD=$!
msg "sleepchild=$SLEEP_CHILD"
wait $SLEEP_CHILD
SLEEP_CHILD=""
}
trap handle_sigterm TERM
if [ -n "$INTERFACE" -a "${INTERFACE#lo}" != "${INTERFACE}" ]; then
msg "ignoring interface ${INTERFACE}";
exit 0;
fi
# static_network_up already occurred
static_network_up && { msg "static_network_up already"; exit 0; }
# local-finished cloud-init-local success or failure
lfin="/run/cloud-init/local-finished"
disable="/etc/cloud/no-blocknet"
[ -f "$lfin" ] && { msg "$lfin found"; exit 0; }
[ -f "$disable" ] && { msg "$disable found"; exit 0; }
dowait 120
msg "gave up waiting for $lfin"
exit 1
end script
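The `dowait`/`handle_sigterm` pair above implements an interruptible sleep: sleeping in a background child and `wait`ing on it lets a TERM trap kill the child and return promptly instead of blocking for the full interval. A standalone sketch of that pattern (function and variable names are mine):

```shell
# Sketch of the interruptible-sleep pattern used above: a TERM trap
# can kill the background 'sleep' child, unblocking 'wait' early.
interruptible_sleep() {
    sleep "$1" &
    _child=$!
    trap 'kill "$_child" 2>/dev/null' TERM
    wait "$_child"
    trap - TERM
}
interruptible_sleep 1
echo "woke up"
```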


@ -1,57 +0,0 @@
# in a lxc container, events for network interfaces do not
# get created or may be missed. This helps cloud-init-nonet along
# by emitting those events if they have not been emitted.
start on container
stop on static-network-up
task
emits net-device-added
console output
script
# if we are inside a container, then we may have to emit the ifup
# events for 'auto' network devices.
set -f
# from /etc/network/if-up.d/upstart
MARK_DEV_PREFIX="/run/network/ifup."
MARK_STATIC_NETWORK_EMITTED="/run/network/static-network-up-emitted"
# if the all static network interfaces are already up, nothing to do
[ -f "$MARK_STATIC_NETWORK_EMITTED" ] && exit 0
# ifquery will exit failure if there is no /run/network directory.
# normally that would get created by one of network-interface.conf
# or networking.conf. But, it is possible that we're running
# before either of those have.
mkdir -p /run/network
# get list of all 'auto' interfaces. if there are none, nothing to do.
auto_list=$(ifquery --list --allow auto 2>/dev/null) || :
[ -z "$auto_list" ] && exit 0
set -- ${auto_list}
[ "$*" = "lo" ] && exit 0
# we only want to emit for interfaces that actually exist, so filter
# out anything that does not.
for iface in "$@"; do
[ "$iface" = "lo" ] && continue
# skip interfaces that are already up
[ -f "${MARK_DEV_PREFIX}${iface}" ] && continue
if [ -d /sys/class/net ]; then
# if /sys is mounted and there is no /sys/class/net/$iface, then
# no device exists, so there is nothing to emit for it
[ -e "/sys/class/net/$iface" ] || continue
else
# sys wasn't mounted, so just check via 'ifconfig'
ifconfig "$iface" >/dev/null 2>&1 || continue
fi
initctl emit --no-wait net-device-added "INTERFACE=$iface" &&
emitted="$emitted $iface" ||
echo "warn: ${UPSTART_JOB} failed to emit net-device-added INTERFACE=$iface"
done
[ -z "${emitted# }" ] ||
echo "${UPSTART_JOB}: emitted ifup for ${emitted# }"
end script
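The loop above decides, per `auto` interface, whether a `net-device-added` event still needs emitting. Its list-handling idiom — splitting a whitespace-separated list with `set --` and skipping the loopback device — can be shown in isolation (the interface names here are illustrative):

```shell
# Illustrative: split a whitespace-separated interface list with
# 'set --' and filter out the loopback device, as the job above does.
auto_list="lo eth0 eth1"
set -- $auto_list
kept=""
for iface in "$@"; do
    [ "$iface" = "lo" ] && continue
    kept="$kept $iface"
done
echo "${kept# }"
```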


@ -1,16 +0,0 @@
# cloud-init - the initial cloud-init job
# crawls metadata service, emits cloud-config
start on mounted MOUNTPOINT=/ and mounted MOUNTPOINT=/run
task
console output
script
lfin=/run/cloud-init/local-finished
ret=0
cloud-init init --local || ret=$?
[ -r /proc/uptime ] && read up idle < /proc/uptime || up="N/A"
echo "$ret up $up" > "$lfin"
exit $ret
end script
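The job above records its result in `/run/cloud-init/local-finished` as a single line of the form `<exit-status> up <uptime>`. A sketch of parsing that marker back out (the sample line and variable names are mine; the format comes from the job above):

```shell
# Illustrative: parse a local-finished marker line of the form
# "<ret> up <uptime>" as written by the job above.
marker="0 up 12.34"
set -- $marker
ret=$1
uptime=$3
echo "ret=$ret uptime=$uptime"
```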


@ -1,66 +0,0 @@
# cloud-init-no-net
# the purpose of this job is
# * to block running of cloud-init until all network interfaces
# configured in /etc/network/interfaces are up
# * timeout if they all do not come up in a reasonable amount of time
start on mounted MOUNTPOINT=/ and stopped cloud-init-local
stop on static-network-up
task
console output
script
set +e # you cannot trap TERM reliably with 'set -e'
SLEEP_CHILD=""
static_network_up() {
local emitted="/run/network/static-network-up-emitted"
# /run/network/static-network-up-emitted is written by
# upstart (via /etc/network/if-up.d/upstart). Its presence would
# indicate that static-network-up has already fired.
[ -e "$emitted" -o -e "/var/$emitted" ]
}
msg() {
local uptime="" idle=""
if [ -r /proc/uptime ]; then
read uptime idle < /proc/uptime
fi
echo "$UPSTART_JOB${uptime:+[${uptime}]}:" "$1"
}
handle_sigterm() {
# if we received sigterm and static networking is up then it probably
# came from upstart as a result of 'stop on static-network-up'
if [ -n "$SLEEP_CHILD" ]; then
if ! kill $SLEEP_CHILD 2>/dev/null; then
[ ! -d "/proc/$SLEEP_CHILD" ] ||
msg "hm.. failed to kill sleep pid $SLEEP_CHILD"
fi
fi
if static_network_up; then
msg "static networking is now up"
exit 0
fi
msg "received SIGTERM, networking not up"
exit 2
}
dowait() {
[ $# -eq 2 ] || msg "waiting $1 seconds for network device"
sleep "$1" &
SLEEP_CHILD=$!
wait $SLEEP_CHILD
SLEEP_CHILD=""
}
trap handle_sigterm TERM
# static_network_up already occurred
static_network_up && exit 0
dowait 5 silent
dowait 10
dowait 115
msg "gave up waiting for a network device."
: > /var/lib/cloud/data/no-net
end script


@ -1,9 +0,0 @@
# cloud-init - the initial cloud-init job
# crawls metadata service, emits cloud-config
start on mounted MOUNTPOINT=/ and stopped cloud-init-nonet
task
console output
exec /usr/bin/cloud-init init


@ -1,19 +0,0 @@
# log shutdowns and reboots to the console (/dev/console)
# this is useful for correlating logs
start on runlevel PREVLEVEL=2
task
console output
script
# runlevel(7) says INIT_HALT will be set to HALT or POWEROFF
date=$(date --utc)
case "$RUNLEVEL:$INIT_HALT" in
6:*) mode="reboot";;
0:HALT) mode="halt";;
0:POWEROFF) mode="poweroff";;
0:*) mode="shutdown-unknown";;
esac
{ read seconds idle < /proc/uptime; } 2>/dev/null || :
echo "$date: shutting down for $mode${seconds:+ [up ${seconds%.*}s]}."
end script
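The `RUNLEVEL`/`INIT_HALT` dispatch above can be exercised on its own; a sketch of the same mapping as a function (the function name is mine):

```shell
# Illustrative rendering of the case dispatch above: runlevel 6 is
# a reboot; runlevel 0 is halt or poweroff depending on INIT_HALT.
shutdown_mode() {
    case "$1:$2" in
        6:*) echo "reboot" ;;
        0:HALT) echo "halt" ;;
        0:POWEROFF) echo "poweroff" ;;
        0:*) echo "shutdown-unknown" ;;
    esac
}
shutdown_mode 0 POWEROFF
```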


@ -1,210 +0,0 @@
#!/usr/bin/python
import os
import shutil
import sys
def find_root():
# expected path is in <top_dir>/packages/
top_dir = os.environ.get("CLOUD_INIT_TOP_D", None)
if top_dir is None:
top_dir = os.path.dirname(
os.path.dirname(os.path.abspath(sys.argv[0])))
if os.path.isfile(os.path.join(top_dir, 'setup.py')):
return os.path.abspath(top_dir)
raise OSError(("Unable to determine where your cloud-init topdir is."
" set CLOUD_INIT_TOP_D?"))
# Use the util functions from cloudinit
sys.path.insert(0, find_root())
from cloudinit import templater
from cloudinit import util
import argparse
# Map package names that show up in 'requires' to what we can actually
# use in our debian 'control' file; this is a translation of the 'requires'
# file's pypi package name to a debian/ubuntu package name.
PKG_MP = {
'argparse': 'python-argparse',
'cheetah': 'python-cheetah',
'configobj': 'python-configobj',
'jinja2': 'python-jinja2',
'jsonpatch': 'python-jsonpatch | python-json-patch',
'oauth': 'python-oauth',
'prettytable': 'python-prettytable',
'pyserial': 'python-serial',
'pyyaml': 'python-yaml',
'requests': 'python-requests',
}
DEBUILD_ARGS = ["-S", "-d"]
def write_debian_folder(root, version, revno, append_requires=[]):
deb_dir = util.abs_join(root, 'debian')
os.makedirs(deb_dir)
# Fill in the change log template
templater.render_to_file(util.abs_join(find_root(),
'packages', 'debian', 'changelog.in'),
util.abs_join(deb_dir, 'changelog'),
params={
'version': version,
'revision': revno,
})
# Write out the control file template
cmd = [util.abs_join(find_root(), 'tools', 'read-dependencies')]
(stdout, _stderr) = util.subp(cmd)
pkgs = [p.lower().strip() for p in stdout.splitlines()]
# Map to known packages
requires = list(append_requires)  # copy; avoid mutating the caller's (or default) list
for p in pkgs:
tgt_pkg = PKG_MP.get(p)
if not tgt_pkg:
raise RuntimeError(("Do not know how to translate pypi dependency"
" %r to a known package") % (p))
else:
requires.append(tgt_pkg)
templater.render_to_file(util.abs_join(find_root(),
'packages', 'debian', 'control.in'),
util.abs_join(deb_dir, 'control'),
params={'requires': requires})
# Just copy the following directly
for base_fn in ['dirs', 'copyright', 'compat', 'rules']:
shutil.copy(util.abs_join(find_root(),
'packages', 'debian', base_fn),
util.abs_join(deb_dir, base_fn))
def main():
parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", dest="verbose",
help=("run verbosely"
" (default: %(default)s)"),
default=False,
action='store_true')
parser.add_argument("--no-cloud-utils", dest="no_cloud_utils",
help=("don't depend on cloud-utils package"
" (default: %(default)s)"),
default=False,
action='store_true')
parser.add_argument("--init-system", dest="init_system",
help=("build deb with INIT_SYSTEM=xxx"
" (default: %(default)s"),
default=os.environ.get("INIT_SYSTEM",
"upstart,systemd"))
for ent in DEBUILD_ARGS:
parser.add_argument(ent, dest="debuild_args", action='append_const',
const=ent, help=("pass through '%s' to debuild" % ent),
default=[])
parser.add_argument("--sign", default=False, action='store_true',
help="sign result. do not pass -us -uc to debuild")
args = parser.parse_args()
if not args.sign:
args.debuild_args.extend(['-us', '-uc'])
os.environ['INIT_SYSTEM'] = args.init_system
capture = True
if args.verbose:
capture = False
with util.tempdir() as tdir:
cmd = [util.abs_join(find_root(), 'tools', 'read-version')]
(sysout, _stderr) = util.subp(cmd)
version = sysout.strip()
cmd = ['bzr', 'revno']
(sysout, _stderr) = util.subp(cmd)
revno = sysout.strip()
# This is really only a temporary archive
# since we will extract it then add in the debian
# folder, then re-archive it for debian happiness
print("Creating a temporary tarball using the 'make-tarball' helper")
cmd = [util.abs_join(find_root(), 'tools', 'make-tarball')]
(sysout, _stderr) = util.subp(cmd)
arch_fn = sysout.strip()
tmp_arch_fn = util.abs_join(tdir, os.path.basename(arch_fn))
shutil.move(arch_fn, tmp_arch_fn)
print("Extracting temporary tarball %r" % (tmp_arch_fn))
cmd = ['tar', '-xvzf', tmp_arch_fn, '-C', tdir]
util.subp(cmd, capture=capture)
extracted_name = tmp_arch_fn[:-len('.tar.gz')]
os.remove(tmp_arch_fn)
xdir = util.abs_join(tdir, 'cloud-init')
shutil.move(extracted_name, xdir)
print("Creating a debian/ folder in %r" % (xdir))
if not args.no_cloud_utils:
append_requires=['cloud-utils | cloud-guest-utils']
else:
append_requires=[]
write_debian_folder(xdir, version, revno, append_requires)
# The naming here seems to follow some debian standard
# so it will whine if it is changed...
tar_fn = "cloud-init_%s~bzr%s.orig.tar.gz" % (version, revno)
print("Archiving the adjusted source into %r" %
(util.abs_join(tdir, tar_fn)))
cmd = ['tar', '-czvf',
util.abs_join(tdir, tar_fn),
'-C', xdir]
cmd.extend(os.listdir(xdir))
util.subp(cmd, capture=capture)
# Copy it locally for reference
shutil.copy(util.abs_join(tdir, tar_fn),
util.abs_join(os.getcwd(), tar_fn))
print("Copied that archive to %r for local usage (if desired)." %
(util.abs_join(os.getcwd(), tar_fn)))
print("Running 'debuild %s' in %r" % (' '.join(args.debuild_args),
xdir))
with util.chdir(xdir):
cmd = ['debuild', '--preserve-envvar', 'INIT_SYSTEM']
if args.debuild_args:
cmd.extend(args.debuild_args)
util.subp(cmd, capture=capture)
link_fn = os.path.join(os.getcwd(), 'cloud-init_all.deb')
link_dsc = os.path.join(os.getcwd(), 'cloud-init.dsc')
for base_fn in os.listdir(os.path.join(tdir)):
full_fn = os.path.join(tdir, base_fn)
if not os.path.isfile(full_fn):
continue
shutil.move(full_fn, base_fn)
print("Wrote %r" % (base_fn))
if base_fn.endswith('_all.deb'):
# Add in the local link
util.del_file(link_fn)
os.symlink(base_fn, link_fn)
print("Linked %r to %r" % (base_fn,
os.path.basename(link_fn)))
if base_fn.endswith('.dsc'):
util.del_file(link_dsc)
os.symlink(base_fn, link_dsc)
print("Linked %r to %r" % (base_fn,
os.path.basename(link_dsc)))
return 0
if __name__ == '__main__':
sys.exit(main())
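The `PKG_MP` tables above are the heart of the dependency handling: a pypi requirement name is translated to the distro package name, and an unknown name is a hard error. A shell sketch of the same lookup for a subset of the Debian table (subset only; the names come from the table above, the function name is mine):

```shell
# Illustrative subset of the PKG_MP pypi -> Debian translation used
# by write_debian_folder() above; unknown names fail, as there.
deb_pkg_for() {
    case "$1" in
        pyyaml)    echo "python-yaml" ;;
        pyserial)  echo "python-serial" ;;
        cheetah)   echo "python-cheetah" ;;
        jsonpatch) echo "python-jsonpatch | python-json-patch" ;;
        *) echo "unknown pypi dependency: $1" >&2; return 1 ;;
    esac
}
deb_pkg_for pyyaml
```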


@ -1,275 +0,0 @@
#!/usr/bin/python
import argparse
import contextlib
import glob
import os
import shutil
import subprocess
import sys
import tempfile
import re
from datetime import datetime
def find_root():
# expected path is in <top_dir>/packages/
top_dir = os.environ.get("CLOUD_INIT_TOP_D", None)
if top_dir is None:
top_dir = os.path.dirname(os.path.dirname(os.path.abspath(sys.argv[0])))
if os.path.isfile(os.path.join(top_dir, 'setup.py')):
return os.path.abspath(top_dir)
raise OSError(("Unable to determine where your cloud-init topdir is."
" set CLOUD_INIT_TOP_D?"))
# Use the util functions from cloudinit
sys.path.insert(0, find_root())
from cloudinit import templater
from cloudinit import util
# Mapping of expected packages to their full names...
# this is a translation of the 'requires'
# file's pypi package name to a redhat/fedora package name.
PKG_MP = {
'redhat': {
'argparse': 'python-argparse',
'cheetah': 'python-cheetah',
'jinja2': 'python-jinja2',
'configobj': 'python-configobj',
'jsonpatch': 'python-jsonpatch',
'oauth': 'python-oauth',
'prettytable': 'python-prettytable',
'pyserial': 'pyserial',
'pyyaml': 'PyYAML',
'requests': 'python-requests',
},
'suse': {
'argparse': 'python-argparse',
'cheetah': 'python-cheetah',
'configobj': 'python-configobj',
'jsonpatch': 'python-jsonpatch',
'oauth': 'python-oauth',
'prettytable': 'python-prettytable',
'pyserial': 'python-pyserial',
'pyyaml': 'python-yaml',
'requests': 'python-requests',
}
}
# Subdirectories of the ~/rpmbuild dir
RPM_BUILD_SUBDIRS = ['BUILD', 'RPMS', 'SOURCES', 'SPECS', 'SRPMS']
def get_log_header(version):
# Try to find the version in the tags output
cmd = ['bzr', 'tags']
(stdout, _stderr) = util.subp(cmd)
a_rev = None
for t in stdout.splitlines():
ver, rev = t.split(None)
if ver == version:
a_rev = rev
break
if not a_rev:
return None
# Extract who made that tag as the header
cmd = ['bzr', 'log', '-r%s' % (a_rev), '--timezone=utc']
(stdout, _stderr) = util.subp(cmd)
kvs = {
'comment': version,
}
for line in stdout.splitlines():
if line.startswith('committer:'):
kvs['who'] = line[len('committer:'):].strip()
if line.startswith('timestamp:'):
ts = line[len('timestamp:'):]
ts = ts.strip()
# http://bugs.python.org/issue6641
ts = ts.replace("+0000", '').strip()
ds = datetime.strptime(ts, '%a %Y-%m-%d %H:%M:%S')
kvs['ds'] = ds
return format_change_line(**kvs)
def format_change_line(ds, who, comment=None):
# Rpmbuild seems to be pretty strict about the date format
d = ds.strftime("%a %b %d %Y")
d += " - %s" % (who)
if comment:
d += " - %s" % (comment)
return "* %s" % (d)
def generate_spec_contents(args, tmpl_fn, top_dir, arc_fn):
# Figure out the version and revno
cmd = [util.abs_join(find_root(), 'tools', 'read-version')]
(stdout, _stderr) = util.subp(cmd)
version = stdout.strip()
cmd = ['bzr', 'revno']
(stdout, _stderr) = util.subp(cmd)
revno = stdout.strip()
# Tmpl params
subs = {}
subs['version'] = version
subs['revno'] = revno
subs['release'] = "bzr%s" % (revno)
if args.sub_release is not None:
subs['subrelease'] = "." + str(args.sub_release)
else:
subs['subrelease'] = ''
subs['archive_name'] = arc_fn
cmd = [util.abs_join(find_root(), 'tools', 'read-dependencies')]
(stdout, _stderr) = util.subp(cmd)
pkgs = [p.lower().strip() for p in stdout.splitlines()]
# Map to known packages
requires = []
for p in pkgs:
tgt_pkg = PKG_MP[args.distro].get(p)
if not tgt_pkg:
raise RuntimeError(("Do not know how to translate pypi dependency"
" %r to a known package") % (p))
else:
requires.append(tgt_pkg)
subs['requires'] = requires
# Format a nice changelog (as best as we can)
changelog = util.load_file(util.abs_join(find_root(), 'ChangeLog'))
changelog_lines = []
missing_versions = 0
for line in changelog.splitlines():
if not line.strip():
continue
if re.match(r"^\s*[\d][.][\d][.][\d]:\s*", line):
line = line.strip(":")
header = get_log_header(line)
if not header:
missing_versions += 1
if missing_versions == 1:
# Must be using a new 'dev'/'trunk' release
changelog_lines.append(format_change_line(datetime.now(),
'??'))
else:
sys.stderr.write(("Changelog version line %s does not "
"have a corresponding tag!\n") % (line))
else:
changelog_lines.append(header)
else:
changelog_lines.append(line)
subs['changelog'] = "\n".join(changelog_lines)
if args.boot == 'sysvinit':
subs['sysvinit'] = True
else:
subs['sysvinit'] = False
if args.boot == 'systemd':
subs['systemd'] = True
else:
subs['systemd'] = False
subs['defines'] = ["_topdir %s" % (top_dir)]
subs['init_sys'] = args.boot
subs['patches'] = [os.path.basename(p) for p in args.patches]
return templater.render_from_file(tmpl_fn, params=subs)
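The changelog loop above keys on version header lines via a regex. A minimal sketch of what that pattern does and does not match (the sample lines are hypothetical):

```python
import re

# Same pattern as used in generate_spec_contents: a "X.Y.Z:" version header,
# optionally surrounded by whitespace.
VERSION_LINE = re.compile(r"^\s*[\d][.][\d][.][\d]:\s*")

assert VERSION_LINE.match("0.7.5:")          # bare version header
assert VERSION_LINE.match("  0.6.3:  ")      # indented is fine too
assert not VERSION_LINE.match(" - fix bug")  # ordinary changelog entry
assert not VERSION_LINE.match("0.7.5")       # trailing colon is required
```

Note the single-digit character classes: a two-digit component such as `0.7.10:` would not match, which is one reason the code falls back to a `??` header when a tag lookup fails.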


def main():
parser = argparse.ArgumentParser()
parser.add_argument("-d", "--distro", dest="distro",
help="select distro (default: %(default)s)",
metavar="DISTRO", default='redhat',
choices=('redhat', 'suse'))
parser.add_argument("-b", "--boot", dest="boot",
help="select boot type (default: %(default)s)",
metavar="TYPE", default='sysvinit',
choices=('sysvinit', 'systemd'))
parser.add_argument("-v", "--verbose", dest="verbose",
help=("run verbosely"
" (default: %(default)s)"),
default=False,
action='store_true')
parser.add_argument('-s', "--sub-release", dest="sub_release",
metavar="RELEASE",
                        help=("an 'internal' release number to concatenate"
                              " with the bzr version number to form"
                              " the final version number"),
type=int,
default=None)
parser.add_argument("-p", "--patch", dest="patches",
help=("include the following patch when building"),
default=[],
action='append')
args = parser.parse_args()
capture = True
if args.verbose:
capture = False
# Clean out the root dir and make sure the dirs we want are in place
root_dir = os.path.expanduser("~/rpmbuild")
if os.path.isdir(root_dir):
shutil.rmtree(root_dir)
arc_dir = util.abs_join(root_dir, 'SOURCES')
build_dirs = [root_dir, arc_dir]
for dname in RPM_BUILD_SUBDIRS:
build_dirs.append(util.abs_join(root_dir, dname))
build_dirs.sort()
util.ensure_dirs(build_dirs)
# Archive the code
cmd = [util.abs_join(find_root(), 'tools', 'make-tarball')]
(stdout, _stderr) = util.subp(cmd)
archive_fn = stdout.strip()
real_archive_fn = os.path.join(arc_dir, os.path.basename(archive_fn))
shutil.move(archive_fn, real_archive_fn)
print("Archived the code in %r" % (real_archive_fn))
# Form the spec file to be used
tmpl_fn = util.abs_join(find_root(), 'packages',
args.distro, 'cloud-init.spec.in')
contents = generate_spec_contents(args, tmpl_fn, root_dir,
os.path.basename(archive_fn))
spec_fn = util.abs_join(root_dir, 'cloud-init.spec')
util.write_file(spec_fn, contents)
print("Created spec file at %r" % (spec_fn))
print(contents)
for p in args.patches:
util.copy(p, util.abs_join(arc_dir, os.path.basename(p)))
# Now build it!
print("Running 'rpmbuild' in %r" % (root_dir))
cmd = ['rpmbuild', '-ba', spec_fn]
util.subp(cmd, capture=capture)
# Copy the items built to our local dir
globs = []
globs.extend(glob.glob("%s/*.rpm" %
(util.abs_join(root_dir, 'RPMS', 'noarch'))))
globs.extend(glob.glob("%s/*.rpm" %
(util.abs_join(root_dir, 'RPMS', 'x86_64'))))
globs.extend(glob.glob("%s/*.rpm" %
(util.abs_join(root_dir, 'RPMS'))))
globs.extend(glob.glob("%s/*.rpm" %
(util.abs_join(root_dir, 'SRPMS'))))
for rpm_fn in globs:
tgt_fn = util.abs_join(os.getcwd(), os.path.basename(rpm_fn))
shutil.move(rpm_fn, tgt_fn)
print("Wrote out %s package %r" % (args.distro, tgt_fn))
return 0
if __name__ == '__main__':
sys.exit(main())


@ -1,6 +0,0 @@
## This is a cheetah template
cloud-init (${version}~bzr${revision}-1) UNRELEASED; urgency=low
* build
-- Scott Moser <smoser@ubuntu.com> Fri, 16 Dec 2011 11:50:25 -0500


@ -1 +0,0 @@
9


@ -1,36 +0,0 @@
## This is a cheetah template
Source: cloud-init
Section: admin
Priority: optional
Maintainer: Scott Moser <smoser@ubuntu.com>
Build-Depends: debhelper (>= 9),
dh-python,
dh-systemd,
python (>= 2.6.6-3~),
python-nose,
pyflakes,
python-setuptools,
python-selinux,
python-cheetah,
python-mocker,
python-httpretty,
#for $r in $requires
${r},
#end for
XS-Python-Version: all
Standards-Version: 3.9.3
Package: cloud-init
Architecture: all
Depends: procps,
python,
#for $r in $requires
${r},
#end for
python-software-properties | software-properties-common,
\${misc:Depends},
Recommends: sudo
XB-Python-Version: \${python:Versions}
Description: Init scripts for cloud instances
Cloud instances need special scripts to run during initialisation
to retrieve and install ssh keys and to let the user run various scripts.


@ -1,41 +0,0 @@
Format-Specification: http://svn.debian.org/wsvn/dep/web/deps/dep5.mdwn?op=file&rev=135
Name: cloud-init
Maintainer: Scott Moser <scott.moser@canonical.com>
Source: https://launchpad.net/cloud-init
Upstream Author: Scott Moser <smoser@canonical.com>
Soren Hansen <soren@canonical.com>
Chuck Short <chuck.short@canonical.com>
Copyright: 2010, Canonical Ltd.
License: GPL-3 or Apache-2.0
License: GPL-3
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License version 3, as
published by the Free Software Foundation.
.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
.
The complete text of the GPL version 3 can be seen in
/usr/share/common-licenses/GPL-3.
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
http://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian-based systems the full text of the Apache version 2.0 license
can be found in `/usr/share/common-licenses/Apache-2.0'.

Some files were not shown because too many files have changed in this diff.