Retire Sahara: remove repo content

The Sahara project is retiring:
- https://review.opendev.org/c/openstack/governance/+/919374

This commit removes the contents of this project's repository.

Depends-On: https://review.opendev.org/c/openstack/project-config/+/919376
Change-Id: I4ff53b361a24b624048ba013861c7fcf51997010
Ghanshyam Mann 2024-05-10 17:27:40 -07:00
parent 1d0bdedd51
commit bbe243e2bf
133 changed files with 8 additions and 16502 deletions

.gitignore

@@ -1,28 +0,0 @@
*.py[co]
*.egg
*.egg-info
dist
build
eggs
parts
var
sdist
develop-eggs
.installed.cfg
pip-log.txt
.tox
*.mo
.mr.developer.cfg
.DS_Store
Thumbs.db
.venv
.idea
out
target
*.iml
*.ipr
*.iws
*.db
.coverage
ChangeLog
AUTHORS

.mailmap

@@ -1,4 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>
Ivan Berezovskiy <iberezovskiy@mirantis.com>

.zuul.yaml

@@ -1,40 +0,0 @@
- project:
    templates:
      - check-requirements
    check:
      jobs:
        - sahara-extra-build-artifacts:
            voting: false
        - openstack-tox-pep8
    gate:
      jobs:
        - sahara-extra-build-artifacts:
            voting: false
        - openstack-tox-pep8
    post:
      jobs:
        - sahara-extra-publish-artifacts
        - publish-openstack-python-branch-tarball

- job:
    name: sahara-extra-build-artifacts
    description: |
      Build the artifacts used by Sahara clusters and jobs.
    parent: unittests
    timeout: 3600
    required-projects:
      - openstack/sahara-extra
    run: playbooks/build-artifacts/run.yaml

- job:
    name: sahara-extra-publish-artifacts
    description: |
      Build the artifacts used by Sahara clusters and jobs
      and publish them on tarballs.openstack.org
    parent: publish-openstack-artifacts
    timeout: 3600
    final: true
    required-projects:
      - openstack/sahara-extra
    run: playbooks/build-artifacts/run.yaml
    post-run: playbooks/build-artifacts/post.yaml

CONTRIBUTING.rst

@@ -1,19 +0,0 @@
The source repository for this project can be found at:
https://opendev.org/openstack/sahara-extra
Pull requests submitted through GitHub are not monitored.
To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:
https://docs.openstack.org/contributors/code-and-documentation/quick-start.html
Bugs should be filed on Storyboard:
https://storyboard.openstack.org/#!/project/openstack/sahara-extra
For more specific information about contributing to this repository, see the
sahara contributor guide:
https://docs.openstack.org/sahara/latest/contributor/contributing.html

HACKING.rst

@@ -1,12 +0,0 @@
Sahara Style Commandments
=========================
- Step 1: Read the OpenStack Style Commandments
https://docs.openstack.org/hacking/latest/
- Step 2: Read on
Sahara Specific Commandments
----------------------------
None so far

LICENSE

@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

MANIFEST.in

@@ -1,2 +0,0 @@
include README.md
graft elements

README.rst

@@ -1,21 +1,10 @@
-========================
-Team and repository tags
-========================
+This project is no longer maintained.
-.. image:: https://governance.openstack.org/tc/badges/sahara-extra.svg
-    :target: https://governance.openstack.org/tc/reference/tags/index.html
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
-.. Change things from this point on
-OpenStack Data Processing ("Sahara") extra repo
-===============================================
-Sahara-extra is place for Sahara components not included into the main `Sahara repository <https://opendev.org/openstack/sahara>`_
-Here is the list of components:
-* Sources for Swift filesystem implementation for Hadoop: https://opendev.org/openstack/sahara-extra/src/branch/master/hadoop-swiftfs/README.rst
-* Sources for main function wrapper that adapt for oozie: https://opendev.org/openstack/sahara-extra/src/branch/master/edp-adapt-for-oozie/README.rst
-* Sources for main function wrapper that adapt for spark: https://opendev.org/openstack/sahara-extra/src/branch/master/edp-adapt-for-spark/README.rst
-* `Diskimage-builder <https://opendev.org/openstack/diskimage-builder>`_ elements moved to the new repo: https://opendev.org/openstack/sahara-image-elements
-* Tools for building artifacts: https://opendev.org/openstack/sahara-extra/src/branch/master/tools
+For any further questions, please email
+openstack-discuss@lists.openstack.org or join #openstack-dev on
+OFTC.

Binary file not shown.

Binary file not shown.

edp-adapt-for-oozie/README.rst

@@ -1,24 +0,0 @@
========================================================
Sources for main function wrapper that adapts for Oozie
========================================================

In order to pass configurations to a MapReduce application through Oozie,
it is necessary to add the following code
(https://opendev.org/openstack/sahara-tests/src/branch/master/sahara_tests/scenario/defaults/edp-examples/edp-java/README.rst):

  // This will add properties from the <configuration> tag specified
  // in the Oozie workflow. For java actions, Oozie writes the
  // configuration values to a file pointed to by oozie.action.conf.xml
  conf.addResource(new Path("file:///",
      System.getProperty("oozie.action.conf.xml")));

This wrapper adds the above configuration file to the default resources and
invokes the actual main function.

The wrapper also provides a workaround for Oozie's System.exit problem
(https://oozie.apache.org/docs/4.0.0/WorkflowFunctionalSpec.html#a3.2.7_Java_Action).
Under Oozie, System.exit is converted to an exception, and an application may
call System.exit more than once. The wrapper stores the status code of the
first System.exit call and returns that stored value even if System.exit is
called again.
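As a quick illustration of that interception trick, here is a minimal,
self-contained sketch (illustrative only, not part of this repository; it
uses the JDK SecurityManager API that the 1.6-era wrapper below relies on,
which is deprecated for removal on modern JVMs):

  public class ExitInterceptDemo {
    public static void main(String[] args) {
      System.setSecurityManager(new SecurityManager() {
        @Override
        public void checkPermission(java.security.Permission perm) {
          // Permit everything; only exit attempts are of interest here.
        }
        @Override
        public void checkExit(int status) {
          throw new SecurityException("Intercepted System.exit(" + status + ")");
        }
      });
      try {
        System.exit(42); // would normally terminate the JVM
      } catch (SecurityException e) {
        // Control returns to the caller instead of the JVM exiting.
        System.out.println(e.getMessage());
      }
      System.setSecurityManager(null);
    }
  }

The MainWrapper below applies the same idea, additionally remembering the
first exit status so it can be re-raised after the wrapped main returns.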

edp-adapt-for-oozie/pom.xml

@@ -1,55 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.openstack.sahara.edp</groupId>
<artifactId>edp-main-wrapper</artifactId>
<version>1.0.0-SNAPSHOT</version>
<name>EDP Java Action Main Wrapper for oozie</name>
<packaging>jar</packaging>
<properties>
<file.encoding>UTF-8</file.encoding>
<downloadSources>true</downloadSources>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.6</source>
<target>1.6</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<configuration>
<configLocation>file://${basedir}/../hadoop-swiftfs/checkstyle.xml</configLocation>
<failOnViolation>false</failOnViolation>
<format>xml</format>
<format>html</format>
</configuration>
</plugin>
</plugins>
</build>
</project>

MainWrapper.java

@@ -1,91 +0,0 @@
package org.openstack.sahara.edp;

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.security.Permission;
import java.util.Arrays;

public class MainWrapper {

  public static void main(String[] args) throws Throwable {
    // Load oozie configuration file, if one was provided
    String actionConf = System.getProperty("oozie.action.conf.xml");
    if (actionConf != null) {
      Class<?> configClass
          = Class.forName("org.apache.hadoop.conf.Configuration");
      Method method = configClass.getMethod("addDefaultResource", String.class);
      method.invoke(null, "action.xml");
    }

    // Install a SecurityManager that turns System.exit into an exception
    SecurityManager originalSecurityManager = System.getSecurityManager();
    WrapperSecurityManager newSecurityManager
        = new WrapperSecurityManager(originalSecurityManager);
    System.setSecurityManager(newSecurityManager);

    // args[0] names the real main class; the rest are its arguments
    Class<?> mainClass = Class.forName(args[0]);
    Method mainMethod = mainClass.getMethod("main", String[].class);
    String[] newArgs = Arrays.copyOfRange(args, 1, args.length);
    Throwable exception = null;
    try {
      mainMethod.invoke(null, (Object) newArgs);
    } catch (InvocationTargetException e) {
      if (!newSecurityManager.getExitInvoked()) {
        exception = e.getTargetException();
      }
    }

    // Restore the original manager, then re-raise the outcome
    System.setSecurityManager(originalSecurityManager);

    if (exception != null) {
      throw exception;
    }
    if (newSecurityManager.getExitInvoked()) {
      // Exit with the status code of the first System.exit call
      System.exit(newSecurityManager.getExitCode());
    }
  }

  static class WrapperSecurityManager extends SecurityManager {
    private static boolean exitInvoked = false;
    private static int firstExitCode;
    private SecurityManager securityManager;

    public WrapperSecurityManager(SecurityManager securityManager) {
      this.securityManager = securityManager;
    }

    @Override
    public void checkPermission(Permission perm, Object context) {
      if (securityManager != null) {
        // check everything with the original SecurityManager
        securityManager.checkPermission(perm, context);
      }
    }

    @Override
    public void checkPermission(Permission perm) {
      if (securityManager != null) {
        // check everything with the original SecurityManager
        securityManager.checkPermission(perm);
      }
    }

    @Override
    public void checkExit(int status) throws SecurityException {
      if (!exitInvoked) {
        // save first System.exit status code
        exitInvoked = true;
        firstExitCode = status;
      }
      throw new SecurityException("Intercepted System.exit(" + status + ")");
    }

    public static boolean getExitInvoked() {
      return exitInvoked;
    }

    public static int getExitCode() {
      return firstExitCode;
    }
  }
}

edp-adapt-for-spark/README.rst

@@ -1,14 +0,0 @@
===========================================
Sources for main function wrapper for Spark
===========================================
The Hadoop configuration for a Spark job must be modified if
the Spark job is going to access Swift paths. Specifically,
the Hadoop configuration must contain values that allow
the job to authenticate to the Swift service.
This wrapper adds a specified XML file to the default Hadoop
Configuration resource list and then calls the specified
main class. Any necessary Hadoop configuration values can
be added to the XML file. This allows the main class to
run and access Swift paths without alteration.
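To make the effect concrete, here is a hedged sketch of job code reading a
Swift path once the wrapper has registered the configuration resource. It
assumes the hadoop-openstack filesystem client from this repository is on the
classpath; the swift:// URI, container, and provider names are illustrative:

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class SwiftReadSketch {
    public static void main(String[] args) throws Exception {
      // The wrapper has already called Configuration.addDefaultResource(),
      // so a plain Configuration carries the Swift auth settings.
      Configuration conf = new Configuration();
      // Illustrative URI: "swift://<container>.<provider>/<path>".
      Path path = new Path("swift://demo-container.sahara/input/data.txt");
      FileSystem fs = path.getFileSystem(conf);
      BufferedReader reader =
          new BufferedReader(new InputStreamReader(fs.open(path)));
      System.out.println(reader.readLine());
      reader.close();
    }
  }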

edp-adapt-for-spark/pom.xml

@@ -1,54 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.openstack.sahara.edp</groupId>
<artifactId>edp-spark-wrapper</artifactId>
<version>1.0.0-SNAPSHOT</version>
<name>EDP Wrapper for Spark</name>
<packaging>jar</packaging>
<properties>
<file.encoding>UTF-8</file.encoding>
<downloadSources>true</downloadSources>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.6</source>
<target>1.6</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<configuration>
<configLocation>file://${basedir}/../hadoop-swiftfs/checkstyle.xml</configLocation>
<failOnViolation>false</failOnViolation>
<format>xml</format>
<format>html</format>
</configuration>
</plugin>
</plugins>
</build>
</project>

SparkWrapper.java

@@ -1,22 +0,0 @@
package org.openstack.sahara.edp;

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.security.Permission;
import java.util.Arrays;

public class SparkWrapper {

  public static void main(String[] args) throws Throwable {
    // args[0] is the XML file to register as a default Hadoop resource
    Class<?> configClass
        = Class.forName("org.apache.hadoop.conf.Configuration");
    Method method = configClass.getMethod("addDefaultResource", String.class);
    method.invoke(null, args[0]);

    // args[1] names the real main class; the rest are its arguments
    Class<?> mainClass = Class.forName(args[1]);
    Method mainMethod = mainClass.getMethod("main", String[].class);
    String[] newArgs = Arrays.copyOfRange(args, 2, args.length);
    mainMethod.invoke(null, (Object) newArgs);
  }
}

edp-examples/README.rst

@@ -1,5 +0,0 @@
EDP Examples
============
All files from this directory have been moved to the new
sahara-tests repository: https://opendev.org/openstack/sahara-tests

hadoop-swiftfs/README.rst

@@ -1,17 +0,0 @@
======================================================
Sources for Swift filesystem implementation for Hadoop
======================================================
These sources were originally published at
https://issues.apache.org/jira/secure/attachment/12583703/HADOOP-8545-033.patch
The sources were obtained by running the "patch" command. All files related
to Hadoop-common were skipped during patching.

Changes made after patching:

* pom.xml was updated to use the hadoop-core 1.1.2 dependency and to add a
  hadoop2 profile
* removed dependencies on Hadoop 2.x in the code (@Override and isDirectory()
  -> isDir())
* removed the Hadoop 2.x tests

There are no unit tests, only integration tests.

hadoop-swiftfs/checkstyle.xml

@@ -1,189 +0,0 @@
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
"-//Puppy Crawl//DTD Check Configuration 1.2//EN"
"http://www.puppycrawl.com/dtds/configuration_1_2.dtd">
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!--
Checkstyle configuration for Hadoop that is based on the sun_checks.xml file
that is bundled with Checkstyle and includes checks for:
- the Java Language Specification at
http://java.sun.com/docs/books/jls/second_edition/html/index.html
- the Sun Code Conventions at http://java.sun.com/docs/codeconv/
- the Javadoc guidelines at
http://java.sun.com/j2se/javadoc/writingdoccomments/index.html
- the JDK Api documentation http://java.sun.com/j2se/docs/api/index.html
- some best practices
Checkstyle is very configurable. Be sure to read the documentation at
http://checkstyle.sf.net (or in your downloaded distribution).
Most Checks are configurable, be sure to consult the documentation.
To completely disable a check, just comment it out or delete it from the file.
Finally, it is worth reading the documentation.
-->
<module name="Checker">
<!-- Checks that a package.html file exists for each package. -->
<!-- See http://checkstyle.sf.net/config_javadoc.html#PackageHtml -->
<module name="JavadocPackage"/>
<!-- Checks whether files end with a new line. -->
<!-- See http://checkstyle.sf.net/config_misc.html#NewlineAtEndOfFile -->
<!-- module name="NewlineAtEndOfFile"/-->
<!-- Checks that property files contain the same keys. -->
<!-- See http://checkstyle.sf.net/config_misc.html#Translation -->
<module name="Translation"/>
<module name="FileLength"/>
<module name="FileTabCharacter"/>
<module name="TreeWalker">
<!-- Checks for Javadoc comments. -->
<!-- See http://checkstyle.sf.net/config_javadoc.html -->
<module name="JavadocType">
<property name="scope" value="public"/>
<property name="allowMissingParamTags" value="true"/>
</module>
<module name="JavadocStyle"/>
<!-- Checks for Naming Conventions. -->
<!-- See http://checkstyle.sf.net/config_naming.html -->
<module name="ConstantName"/>
<module name="LocalFinalVariableName"/>
<module name="LocalVariableName"/>
<module name="MemberName"/>
<module name="MethodName"/>
<module name="PackageName"/>
<module name="ParameterName"/>
<module name="StaticVariableName"/>
<module name="TypeName"/>
<!-- Checks for Headers -->
<!-- See http://checkstyle.sf.net/config_header.html -->
<!-- <module name="Header"> -->
<!-- The follow property value demonstrates the ability -->
<!-- to have access to ANT properties. In this case it uses -->
<!-- the ${basedir} property to allow Checkstyle to be run -->
<!-- from any directory within a project. See property -->
<!-- expansion, -->
<!-- http://checkstyle.sf.net/config.html#properties -->
<!-- <property -->
<!-- name="headerFile" -->
<!-- value="${basedir}/java.header"/> -->
<!-- </module> -->
<!-- Following interprets the header file as regular expressions. -->
<!-- <module name="RegexpHeader"/> -->
<!-- Checks for imports -->
<!-- See http://checkstyle.sf.net/config_import.html -->
<module name="IllegalImport"/> <!-- defaults to sun.* packages -->
<module name="RedundantImport"/>
<module name="UnusedImports"/>
<!-- Checks for Size Violations. -->
<!-- See http://checkstyle.sf.net/config_sizes.html -->
<module name="LineLength"/>
<module name="MethodLength"/>
<module name="ParameterNumber"/>
<!-- Checks for whitespace -->
<!-- See http://checkstyle.sf.net/config_whitespace.html -->
<module name="EmptyForIteratorPad"/>
<module name="MethodParamPad"/>
<module name="NoWhitespaceAfter"/>
<module name="NoWhitespaceBefore"/>
<module name="ParenPad"/>
<module name="TypecastParenPad"/>
<module name="WhitespaceAfter">
<property name="tokens" value="COMMA, SEMI"/>
</module>
<module name="WhitespaceAround">
<property name="tokens" value="ASSIGN"/>
</module>
<!-- Modifier Checks -->
<!-- See http://checkstyle.sf.net/config_modifiers.html -->
<module name="ModifierOrder"/>
<module name="RedundantModifier"/>
<!-- Checks for blocks. You know, those {}'s -->
<!-- See http://checkstyle.sf.net/config_blocks.html -->
<module name="AvoidNestedBlocks"/>
<module name="EmptyBlock"/>
<module name="LeftCurly"/>
<module name="NeedBraces"/>
<module name="RightCurly"/>
<!-- Checks for common coding problems -->
<!-- See http://checkstyle.sf.net/config_coding.html -->
<!-- module name="AvoidInlineConditionals"/-->
<!--<module name="DoubleCheckedLocking"/>-->
<module name="EmptyStatement"/>
<module name="EqualsHashCode"/>
<module name="HiddenField">
<property name="ignoreConstructorParameter" value="true"/>
</module>
<module name="IllegalInstantiation"/>
<module name="InnerAssignment"/>
<module name="MissingSwitchDefault"/>
<module name="RedundantThrows"/>
<module name="SimplifyBooleanExpression"/>
<module name="SimplifyBooleanReturn"/>
<!-- Checks for class design -->
<!-- See http://checkstyle.sf.net/config_design.html -->
<module name="FinalClass"/>
<module name="HideUtilityClassConstructor"/>
<module name="InterfaceIsType"/>
<module name="VisibilityModifier"/>
<!-- Miscellaneous other checks. -->
<!-- See http://checkstyle.sf.net/config_misc.html -->
<module name="ArrayTypeStyle"/>
<module name="Indentation">
<property name="basicOffset" value="2" />
<property name="caseIndent" value="0" />
</module>
<module name="TodoComment"/>
<module name="UpperEll"/>
</module>
</module>

hadoop-swiftfs/pom.xml

@@ -1,194 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-openstack</artifactId>
<version>3.0.0-SNAPSHOT</version>
<name>Apache Hadoop OpenStack support</name>
<description>
This module contains code to support integration with OpenStack.
Currently this consists of a filesystem client to read data from
and write data to an OpenStack Swift object store.
</description>
<packaging>jar</packaging>
<properties>
<targetJavaVersion>1.6</targetJavaVersion>
<sourceJavaVersion>1.6</sourceJavaVersion>
<file.encoding>UTF-8</file.encoding>
<downloadSources>true</downloadSources>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<hadoop.artifactid>hadoop-core</hadoop.artifactid>
<hadoop.version>1.2.1</hadoop.version>
</properties>
<profiles>
<profile>
<id>tests-off</id>
<activation>
<file>
<missing>src/test/resources/auth-keys.xml</missing>
</file>
</activation>
<properties>
<maven.test.skip>true</maven.test.skip>
</properties>
</profile>
<profile>
<id>tests-on</id>
<activation>
<file>
<exists>src/test/resources/auth-keys.xml</exists>
</file>
</activation>
<properties>
<maven.test.skip>false</maven.test.skip>
</properties>
</profile>
<profile>
<id>hadoop2</id>
<properties>
<hadoop.artifactid>hadoop-common</hadoop.artifactid>
<hadoop.version>2.4.1</hadoop.version>
</properties>
</profile>
<profile>
<id>hadoop3</id>
<properties>
<hadoop.artifactid>hadoop-common</hadoop.artifactid>
<hadoop.version>3.0.1</hadoop.version>
<targetJavaVersion>1.8</targetJavaVersion>
<sourceJavaVersion>1.8</sourceJavaVersion>
</properties>
</profile>
</profiles>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-site-plugin</artifactId>
<dependencies>
<dependency>
<groupId>org.apache.maven.doxia</groupId>
<artifactId>doxia-module-markdown</artifactId>
<version>1.3</version>
</dependency>
</dependencies>
<configuration>
<inputEncoding>UTF-8</inputEncoding>
<outputEncoding>UTF-8</outputEncoding>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-project-info-reports-plugin</artifactId>
<configuration>
<dependencyDetailsEnabled>false</dependencyDetailsEnabled>
<dependencyLocationsEnabled>false</dependencyLocationsEnabled>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>${sourceJavaVersion}</source>
<target>${targetJavaVersion}</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<configuration>
<configLocation>file://${basedir}/checkstyle.xml</configLocation>
<failOnViolation>false</failOnViolation>
<format>xml</format>
<format>html</format>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>${hadoop.artifactid}</artifactId>
<version>${hadoop.version}</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-mapper-asl</artifactId>
<version>1.9.12</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-core-asl</artifactId>
<version>1.9.12</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.2.5</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>commons-httpclient</groupId>
<artifactId>commons-httpclient</artifactId>
<version>3.1</version>
<scope>compile</scope>
</dependency>
<!-- Used for loading test resources and converting a File to byte[] -->
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.4</version>
<scope>compile</scope>
</dependency>
<!-- Used for mocking dependencies -->
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-all</artifactId>
<version>1.8.5</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<scope>test</scope>
<version>14.0.1</version>
</dependency>
</dependencies>
</project>

ApiKeyAuthenticationRequest.java

@@ -1,66 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
import org.codehaus.jackson.annotate.JsonProperty;
/**
* Class that represents authentication request to Openstack Keystone.
* Contains basic authentication information.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS
*/
public class ApiKeyAuthenticationRequest extends AuthenticationRequestV2 {
/**
* Credentials for login
*/
private ApiKeyCredentials apiKeyCredentials;
/**
* API key auth
* @param tenantName tenant
* @param apiKeyCredentials credentials
*/
public ApiKeyAuthenticationRequest(String tenantName, ApiKeyCredentials apiKeyCredentials) {
this.tenantName = tenantName;
this.apiKeyCredentials = apiKeyCredentials;
}
/**
* @return credentials for login into Keystone
*/
@JsonProperty("RAX-KSKEY:apiKeyCredentials")
public ApiKeyCredentials getApiKeyCredentials() {
return apiKeyCredentials;
}
/**
* @param apiKeyCredentials credentials for login into Keystone
*/
public void setApiKeyCredentials(ApiKeyCredentials apiKeyCredentials) {
this.apiKeyCredentials = apiKeyCredentials;
}
@Override
public String toString() {
return "Auth as " +
"tenant '" + tenantName + "' "
+ apiKeyCredentials;
}
}

ApiKeyCredentials.java

@@ -1,87 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* Describes credentials to log in Swift using Keystone authentication.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class ApiKeyCredentials {
/**
* user login
*/
private String username;
/**
* user password
*/
private String apikey;
/**
* default constructor
*/
public ApiKeyCredentials() {
}
/**
* @param username user login
* @param apikey user api key
*/
public ApiKeyCredentials(String username, String apikey) {
this.username = username;
this.apikey = apikey;
}
/**
* @return user api key
*/
public String getApiKey() {
return apikey;
}
/**
* @param apikey user api key
*/
public void setApiKey(String apikey) {
this.apikey = apikey;
}
/**
* @return login
*/
public String getUsername() {
return username;
}
/**
* @param username login
*/
public void setUsername(String username) {
this.username = username;
}
@Override
public String toString() {
return "user " +
"'" + username + '\'' +
" with key of length " + ((apikey == null) ? 0 : apikey.length());
}
}

AuthenticationRequest.java

@@ -1,36 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* Class that represents authentication request to Openstack Keystone.
* Contains basic authentication information.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class AuthenticationRequest {
public AuthenticationRequest() {
}
@Override
public String toString() {
return "AuthenticationRequest";
}
}

AuthenticationRequestV2.java

@@ -1,57 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* Class that represents authentication request to Openstack Keystone.
* Contains basic authentication information.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class AuthenticationRequestV2 extends AuthenticationRequest {
/**
* tenant name
*/
protected String tenantName;
public AuthenticationRequestV2() {
}
/**
* @return tenant name for Keystone authorization
*/
public String getTenantName() {
return tenantName;
}
/**
* @param tenantName tenant name for authorization
*/
public void setTenantName(String tenantName) {
this.tenantName = tenantName;
}
@Override
public String toString() {
return "AuthenticationRequestV2{" +
"tenantName='" + tenantName + '\'' +
'}';
}
}

AuthenticationRequestV3.java

@@ -1,32 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* Class that represents authentication request to Openstack Keystone.
* Contains basic authentication information.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class AuthenticationRequestV3 extends AuthenticationRequest {
public AuthenticationRequestV3() {
}
}

AuthenticationRequestWrapper.java

@@ -1,59 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* This class is used for correct hierarchy mapping of
* Keystone authentication model and java code.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class AuthenticationRequestWrapper {
/**
* authentication request
*/
private AuthenticationRequest auth;
/**
* default constructor used for json parsing
*/
public AuthenticationRequestWrapper() {
}
/**
* @param auth authentication requests
*/
public AuthenticationRequestWrapper(AuthenticationRequest auth) {
this.auth = auth;
}
/**
* @return authentication request
*/
public AuthenticationRequest getAuth() {
return auth;
}
/**
* @param auth authentication request
*/
public void setAuth(AuthenticationRequest auth) {
this.auth = auth;
}
}
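As a quick illustration of how these Jackson-mapped classes become the JSON
body Keystone v2 expects, the sketch below serializes a wrapped API-key
request with the Jackson 1.x ObjectMapper (the jackson-mapper-asl dependency
declared in the pom above). The class names come from this package; the
tenant and credential values are made up:

  import org.apache.hadoop.fs.swift.auth.ApiKeyAuthenticationRequest;
  import org.apache.hadoop.fs.swift.auth.ApiKeyCredentials;
  import org.apache.hadoop.fs.swift.auth.AuthenticationRequestWrapper;
  import org.codehaus.jackson.map.ObjectMapper;

  public class AuthJsonSketch {
    public static void main(String[] args) throws Exception {
      ApiKeyCredentials creds = new ApiKeyCredentials("demo-user", "demo-key");
      // Wrapping yields the top-level {"auth": {...}} object Keystone expects.
      AuthenticationRequestWrapper wrapper = new AuthenticationRequestWrapper(
          new ApiKeyAuthenticationRequest("demo-tenant", creds));
      ObjectMapper mapper = new ObjectMapper();
      // Prints something like:
      // {"auth":{"tenantName":"demo-tenant",
      //   "RAX-KSKEY:apiKeyCredentials":{"username":"demo-user","apiKey":"demo-key"}}}
      System.out.println(mapper.writeValueAsString(wrapper));
    }
  }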

AuthenticationResponse.java

@@ -1,69 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
import org.apache.hadoop.fs.swift.auth.entities.AccessToken;
import org.apache.hadoop.fs.swift.auth.entities.Catalog;
import org.apache.hadoop.fs.swift.auth.entities.User;
import java.util.List;
/**
* Response from KeyStone deserialized into AuthenticationResponse class.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class AuthenticationResponse {
private Object metadata;
private List<Catalog> serviceCatalog;
private User user;
private AccessToken token;
public Object getMetadata() {
return metadata;
}
public void setMetadata(Object metadata) {
this.metadata = metadata;
}
public List<Catalog> getServiceCatalog() {
return serviceCatalog;
}
public void setServiceCatalog(List<Catalog> serviceCatalog) {
this.serviceCatalog = serviceCatalog;
}
public User getUser() {
return user;
}
public void setUser(User user) {
this.user = user;
}
public AccessToken getToken() {
return token;
}
public void setToken(AccessToken token) {
this.token = token;
}
}

AuthenticationResponseV3.java

@@ -1,62 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
import org.apache.hadoop.fs.swift.auth.entities.CatalogV3;
import org.apache.hadoop.fs.swift.auth.entities.Tenant;
import org.codehaus.jackson.annotate.JsonIgnoreProperties;
import java.util.List;
/**
* Response from KeyStone deserialized into AuthenticationResponse class.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
@JsonIgnoreProperties(ignoreUnknown = true)
public class AuthenticationResponseV3 {
private List<CatalogV3> catalog;
private String expires_at;
private Tenant project;
public List<CatalogV3> getCatalog() {
return catalog;
}
public void setCatalog(List<CatalogV3> catalog) {
this.catalog = catalog;
}
public String getExpires_at() {
return expires_at;
}
public void setExpires_at(String expires_at) {
this.expires_at = expires_at;
}
public Tenant getProject() {
return project;
}
public void setProject(Tenant project) {
this.project = project;
}
}

AuthenticationWrapper.java

@@ -1,47 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* This class is used for correct hierarchy mapping of
* Keystone authentication model and java code
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class AuthenticationWrapper {
/**
* authentication response field
*/
private AuthenticationResponse access;
/**
* @return authentication response
*/
public AuthenticationResponse getAccess() {
return access;
}
/**
* @param access sets authentication response
*/
public void setAccess(AuthenticationResponse access) {
this.access = access;
}
}

AuthenticationWrapperV3.java

@@ -1,47 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* This class is used for correct hierarchy mapping of
* Keystone authentication model and java code
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class AuthenticationWrapperV3 {
/**
* authentication response field
*/
private AuthenticationResponseV3 token;
/**
* @return authentication response
*/
public AuthenticationResponseV3 getToken() {
return token;
}
/**
* @param token sets authentication response
*/
public void setToken(AuthenticationResponseV3 token) {
this.token = token;
}
}

KeyStoneAuthRequest.java

@@ -1,59 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* Class that represents authentication to OpenStack Keystone.
* Contains basic authentication information.
* Used when {@link ApiKeyAuthenticationRequest} is not applicable.
* (problem with different Keystone installations/versions/modifications)
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class KeyStoneAuthRequest extends AuthenticationRequestV2 {
/**
* Credentials for Keystone authentication
*/
private KeystoneApiKeyCredentials apiAccessKeyCredentials;
/**
* @param tenant Keystone tenant name for authentication
* @param apiAccessKeyCredentials Credentials for authentication
*/
public KeyStoneAuthRequest(String tenant, KeystoneApiKeyCredentials apiAccessKeyCredentials) {
this.apiAccessKeyCredentials = apiAccessKeyCredentials;
this.tenantName = tenant;
}
public KeystoneApiKeyCredentials getApiAccessKeyCredentials() {
return apiAccessKeyCredentials;
}
public void setApiAccessKeyCredentials(KeystoneApiKeyCredentials apiAccessKeyCredentials) {
this.apiAccessKeyCredentials = apiAccessKeyCredentials;
}
@Override
public String toString() {
return "KeyStoneAuthRequest as " +
"tenant '" + tenantName + "' "
+ apiAccessKeyCredentials;
}
}

KeystoneApiKeyCredentials.java

@@ -1,66 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* Class for Keystone authentication.
* Used when {@link ApiKeyCredentials} is not applicable
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class KeystoneApiKeyCredentials {
/**
* User access key
*/
private String accessKey;
/**
* User access secret
*/
private String secretKey;
public KeystoneApiKeyCredentials(String accessKey, String secretKey) {
this.accessKey = accessKey;
this.secretKey = secretKey;
}
public String getAccessKey() {
return accessKey;
}
public void setAccessKey(String accessKey) {
this.accessKey = accessKey;
}
public String getSecretKey() {
return secretKey;
}
public void setSecretKey(String secretKey) {
this.secretKey = secretKey;
}
@Override
public String toString() {
return "user " +
"'" + accessKey + '\'' +
" with key of length " + ((secretKey == null) ? 0 : secretKey.length());
}
}

PasswordAuthenticationRequest.java

@@ -1,62 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* Class that represents authentication request to Openstack Keystone.
* Contains basic authentication information.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class PasswordAuthenticationRequest extends AuthenticationRequestV2 {
/**
* Credentials for login
*/
private PasswordCredentials passwordCredentials;
/**
* @param tenantName tenant
* @param passwordCredentials password credentials
*/
public PasswordAuthenticationRequest(String tenantName, PasswordCredentials passwordCredentials) {
this.tenantName = tenantName;
this.passwordCredentials = passwordCredentials;
}
/**
* @return credentials for login into Keystone
*/
public PasswordCredentials getPasswordCredentials() {
return passwordCredentials;
}
/**
* @param passwordCredentials credentials for login into Keystone
*/
public void setPasswordCredentials(PasswordCredentials passwordCredentials) {
this.passwordCredentials = passwordCredentials;
}
@Override
public String toString() {
return "Authenticate as " +
"tenant '" + tenantName + "' "
+ passwordCredentials;
}
}

PasswordAuthenticationRequestV3.java

@@ -1,167 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
import java.util.HashMap;
import java.util.Map;
import org.codehaus.jackson.annotate.JsonProperty;
import org.codehaus.jackson.annotate.JsonWriteNullProperties;
/**
* Class that represents authentication request to Openstack Keystone v3.
* Contains basic authentication information.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
@JsonWriteNullProperties(false)
public class PasswordAuthenticationRequestV3 extends AuthenticationRequestV3 {
/**
* Credentials for login
*/
private final IdentityWrapper identity;
private final ScopeWrapper scope;
public PasswordAuthenticationRequestV3(ScopeWrapper scope,
PasswordCredentialsV3 passwordCreds) {
this.identity = new IdentityWrapper(new PasswordWrapper(passwordCreds));
this.scope = scope;
}
public PasswordAuthenticationRequestV3(String projectName,
PasswordCredentialsV3 passwordCreds) {
this(projectName == null ? null :
new ScopeWrapper(new ProjectWrapper(projectName, passwordCreds.domain)),
passwordCreds);
}
public IdentityWrapper getIdentity() {
return identity;
}
public ScopeWrapper getScope() {
return scope;
}
@Override
public String toString() {
return "Authenticate(v3) as " + identity.getPassword().getUser();
}
public static class IdentityWrapper {
private final PasswordWrapper password;
private final String[] methods;
public IdentityWrapper(PasswordWrapper password) {
this.password = password;
this.methods = new String[]{"password"};
}
public PasswordWrapper getPassword() {
return password;
}
public String[] getMethods() {
return methods;
}
}
/**
* THIS CLASS IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public static class PasswordWrapper {
private final PasswordCredentialsV3 user;
public PasswordWrapper(PasswordCredentialsV3 user) {
this.user = user;
}
public PasswordCredentialsV3 getUser() {
return user;
}
}
/**
* THIS CLASS IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
@JsonWriteNullProperties(false)
public static class ScopeWrapper {
private final ProjectWrapper project;
private final TrustWrapper trust;
public ScopeWrapper(ProjectWrapper project) {
this.project = project;
this.trust = null;
}
public ScopeWrapper(TrustWrapper trust) {
this.project = null;
this.trust = trust;
}
public ProjectWrapper getProject() {
return project;
}
@JsonProperty("OS-TRUST:trust")
public TrustWrapper getTrust() {
return trust;
}
}
/**
* THIS CLASS IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public static class ProjectWrapper {
private final String name;
private final Map<String, String> domain;
public ProjectWrapper(String projectName, Map<String, String> domain) {
this.domain = domain;
this.name = projectName;
}
public String getName() {
return name;
}
public Map<String, String> getDomain() {
return domain;
}
}
/**
* THIS CLASS IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public static class TrustWrapper {
private final String id;
public TrustWrapper(String trustId) {
id = trustId;
}
public String getId() {
return id;
}
}
}
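A minimal usage sketch, not taken from the deleted client code: build a v3 password request and serialize it with the Jackson 1.x ObjectMapper that matches the org.codehaus.jackson annotations used above. The project and credential values are hypothetical.
import org.codehaus.jackson.map.ObjectMapper;

public class PasswordAuthDemo {
  public static void main(String[] args) throws Exception {
    // null domain name/id: the credentials fall back to {"id": "default"}
    PasswordCredentialsV3 creds =
        new PasswordCredentialsV3("demo", "secret", null, null);
    PasswordAuthenticationRequestV3 request =
        new PasswordAuthenticationRequestV3("demo-project", creds);
    // Emits the {"identity": {...}, "scope": {...}} structure; any outer
    // "auth" wrapper would come from AuthenticationRequestV3, not shown here.
    System.out.println(new ObjectMapper().writeValueAsString(request));
  }
}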

View File

@ -1,87 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* Describes credentials to log in Swift using Keystone authentication.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class PasswordCredentials {
/**
* user login
*/
private String username;
/**
* user password
*/
private String password;
/**
* default constructor
*/
public PasswordCredentials() {
}
/**
* @param username user login
* @param password user password
*/
public PasswordCredentials(String username, String password) {
this.username = username;
this.password = password;
}
/**
* @return user password
*/
public String getPassword() {
return password;
}
/**
* @param password user password
*/
public void setPassword(String password) {
this.password = password;
}
/**
* @return login
*/
public String getUsername() {
return username;
}
/**
* @param username login
*/
public void setUsername(String username) {
this.username = username;
}
@Override
public String toString() {
return "user '" + username + '\'' +
" with password of length " + ((password == null) ? 0 : password.length());
}
}

View File

@ -1,104 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
import java.util.HashMap;
import java.util.Map;
/**
* Describes credentials to log in Swift using Keystone v3 authentication.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class PasswordCredentialsV3 {
/**
* user login
*/
private String name;
/**
* user password
*/
private String password;
/**
* user's domain name
*/
public final Map<String,String> domain;
/**
* @param name user login
* @param password user password
* @param domain_name user's domain name
* @param domain_id user's domain id; takes precedence over domain_name
*/
public PasswordCredentialsV3(String name, String password, String domain_name, String domain_id) {
this.name = name;
this.password = password;
this.domain = new HashMap<String, String>();
if (domain_id != null) {
this.domain.put("id", domain_id);
} else if (domain_name != null) {
this.domain.put("name", domain_name);
} else {
this.domain.put("id", "default");
}
}
/**
* @return user password
*/
public String getPassword() {
return password;
}
/**
* @param password user password
*/
public void setPassword(String password) {
this.password = password;
}
/**
* @return login
*/
public String getName() {
return name;
}
/**
* @param name login
*/
public void setName(String name) {
this.name = name;
}
@Override
public String toString() {
String domain_info;
if (domain.containsKey("id")) {
domain_info = "domain id '" + domain.get("id") + "'";
} else {
domain_info = "domain name '" + domain.get("name") + "'";
}
return "user '" + name + '\'' +
" with password of length " + ((password == null) ? 0 : password.length()) +
", " + domain_info;
}
}
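The id/name precedence implemented by the constructor, illustrated with hypothetical values:
// Hypothetical values showing the constructor's fallback chain.
new PasswordCredentialsV3("u", "p", "Default", "d1"); // domain = {"id": "d1"} - id wins
new PasswordCredentialsV3("u", "p", "Default", null); // domain = {"name": "Default"}
new PasswordCredentialsV3("u", "p", null, null);      // domain = {"id": "default"}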

View File

@ -1,97 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* Describes user roles in Openstack system.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class Roles {
/**
* role name
*/
private String name;
/**
* This field is used in the RackSpace auth model
*/
private String id;
/**
* This field is used in the RackSpace auth model
*/
private String description;
/**
* Service id used in HP public Cloud
*/
private String serviceId;
/**
* Tenant id used in HP public Cloud
*/
private String tenantId;
/**
* @return role name
*/
public String getName() {
return name;
}
/**
* @param name role name
*/
public void setName(String name) {
this.name = name;
}
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public String getServiceId() {
return serviceId;
}
public void setServiceId(String serviceId) {
this.serviceId = serviceId;
}
public String getTenantId() {
return tenantId;
}
public void setTenantId(String tenantId) {
this.tenantId = tenantId;
}
}

View File

@ -1,83 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* Class that represents authentication request to Openstack Keystone v3.
* Contains basic authentication information.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class TokenAuthenticationRequestV3 extends AuthenticationRequestV3 {
/**
* Credentials for login.
*/
private final IdentityWrapper identity;
public TokenAuthenticationRequestV3(String token) {
this.identity = new IdentityWrapper(new TokenWrapper(token));
}
public IdentityWrapper getIdentity() {
return identity;
}
@Override
public String toString() {
return "Authenticate(v3) as token";
}
/**
* THIS CLASS IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public static class IdentityWrapper {
private final TokenWrapper token;
private final String[] methods;
public IdentityWrapper(TokenWrapper token) {
this.token = token;
this.methods = new String[]{"token"};
}
public String[] getMethods() {
return methods;
}
public TokenWrapper getToken() {
return token;
}
}
/**
* THIS CLASS IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public static class TokenWrapper {
private final String token;
public TokenWrapper(String token) {
this.token = token;
}
public String getId() {
return token;
}
}
}
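Serialized through the getters above, the identity section comes out roughly as follows (token value hypothetical):
"identity" : {
  "methods" : [ "token" ],
  "token" : { "id" : "8bbea4215113abdab9d4c8fb0d37" }
}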

View File

@ -1,40 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth;
/**
* Class that represents authentication request to Openstack Keystone v3.
* Contains basic authentication information.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class TrustAuthenticationRequest extends PasswordAuthenticationRequestV3 {
public TrustAuthenticationRequest(PasswordCredentialsV3 passwordCredentials,
String trustId) {
super(new ScopeWrapper(new TrustWrapper(trustId)), passwordCredentials);
}
@Override
public String toString() {
return super.toString() +
", trust-id '" + getScope().getTrust().getId() + "'";
}
}
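Because ScopeWrapper annotates its trust accessor with @JsonProperty("OS-TRUST:trust"), the scope section of this request serializes roughly as (trust id hypothetical):
"scope" : {
  "OS-TRUST:trust" : { "id" : "fe0aef" }
}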

View File

@ -1,108 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth.entities;
import org.codehaus.jackson.annotate.JsonIgnoreProperties;
/**
* Access token representation of Openstack Keystone authentication.
* Class holds token id, tenant and expiration time.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*
* Example:
* <pre>
* "token" : {
* "RAX-AUTH:authenticatedBy" : [ "APIKEY" ],
* "expires" : "2013-07-12T05:19:24.685-05:00",
* "id" : "8bbea4215113abdab9d4c8fb0d37",
* "tenant" : { "id" : "01011970",
* "name" : "77777"
* }
* }
* </pre>
*/
@JsonIgnoreProperties(ignoreUnknown = true)
public class AccessToken {
/**
* token expiration time
*/
private String expires;
/**
* token id
*/
private String id;
/**
* tenant name for whom id is attached
*/
private Tenant tenant;
/**
* @return token expiration time
*/
public String getExpires() {
return expires;
}
/**
* @param expires the token expiration time
*/
public void setExpires(String expires) {
this.expires = expires;
}
/**
* @return token value
*/
public String getId() {
return id;
}
/**
* @param id token value
*/
public void setId(String id) {
this.id = id;
}
/**
* @return tenant authenticated in Openstack Keystone
*/
public Tenant getTenant() {
return tenant;
}
/**
* @param tenant tenant authenticated in Openstack Keystone
*/
public void setTenant(Tenant tenant) {
this.tenant = tenant;
}
@Override
public String toString() {
return "AccessToken{" +
"id='" + id + '\'' +
", tenant=" + tenant +
", expires='" + expires + '\'' +
'}';
}
}

View File

@ -1,107 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth.entities;
import org.codehaus.jackson.annotate.JsonIgnoreProperties;
import java.util.List;
/**
* Describes Openstack Swift REST endpoints.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
@JsonIgnoreProperties(ignoreUnknown = true)
public class Catalog {
/**
* List of valid swift endpoints
*/
private List<Endpoint> endpoints;
/**
* endpoint links hold additional descriptive information
* which isn't used in the Hadoop and Swift integration
*/
private List<Object> endpoints_links;
/**
* Openstack REST service name. In our case name = "keystone"
*/
private String name;
/**
* Type of REST service. In our case type = "identity"
*/
private String type;
/**
* @return List of endpoints
*/
public List<Endpoint> getEndpoints() {
return endpoints;
}
/**
* @param endpoints list of endpoints
*/
public void setEndpoints(List<Endpoint> endpoints) {
this.endpoints = endpoints;
}
/**
* @return list of endpoint links
*/
public List<Object> getEndpoints_links() {
return endpoints_links;
}
/**
* @param endpoints_links list of endpoint links
*/
public void setEndpoints_links(List<Object> endpoints_links) {
this.endpoints_links = endpoints_links;
}
/**
* @return name of Openstack REST service
*/
public String getName() {
return name;
}
/**
* @param name name of the Openstack REST service
*/
public void setName(String name) {
this.name = name;
}
/**
* @return type of Openstack REST service
*/
public String getType() {
return type;
}
/**
* @param type type of the REST service
*/
public void setType(String type) {
this.type = type;
}
}

View File

@ -1,88 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth.entities;
import org.codehaus.jackson.annotate.JsonIgnoreProperties;
import java.util.List;
/**
* Describes Openstack Swift REST endpoints.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
@JsonIgnoreProperties(ignoreUnknown = true)
public class CatalogV3 {
/**
* List of valid swift endpoints
*/
private List<EndpointV3> endpoints;
/**
* Openstack REST service name. In our case name = "keystone"
*/
private String name;
/**
* Type of REST service. In our case type = "identity"
*/
private String type;
/**
* @return List of endpoints
*/
public List<EndpointV3> getEndpoints() {
return endpoints;
}
/**
* @param endpoints list of endpoints
*/
public void setEndpoints(List<EndpointV3> endpoints) {
this.endpoints = endpoints;
}
/**
* @return name of Openstack REST service
*/
public String getName() {
return name;
}
/**
* @param name name of the Openstack REST service
*/
public void setName(String name) {
this.name = name;
}
/**
* @return type of Openstack REST service
*/
public String getType() {
return type;
}
/**
* @param type type of the REST service
*/
public void setType(String type) {
this.type = type;
}
}

View File

@ -1,194 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth.entities;
import org.codehaus.jackson.annotate.JsonIgnoreProperties;
import java.net.URI;
/**
* Openstack Swift endpoint description.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
@JsonIgnoreProperties(ignoreUnknown = true)
public class Endpoint {
/**
* endpoint id
*/
private String id;
/**
* Keystone admin URL
*/
private URI adminURL;
/**
* Keystone internal URL
*/
private URI internalURL;
/**
* public accessible URL
*/
private URI publicURL;
/**
* public accessible URL#2
*/
private URI publicURL2;
/**
* Openstack region name
*/
private String region;
/**
* This field is used in the RackSpace authentication model
*/
private String tenantId;
/**
* This field is used in the RackSpace auth model
*/
private String versionId;
/**
* This field is used in the RackSpace auth model
*/
private String versionInfo;
/**
* This field is used in the RackSpace auth model
*/
private String versionList;
/**
* @return endpoint id
*/
public String getId() {
return id;
}
/**
* @param id endpoint id
*/
public void setId(String id) {
this.id = id;
}
/**
* @return Keystone admin URL
*/
public URI getAdminURL() {
return adminURL;
}
/**
* @param adminURL Keystone admin URL
*/
public void setAdminURL(URI adminURL) {
this.adminURL = adminURL;
}
/**
* @return internal Keystone
*/
public URI getInternalURL() {
return internalURL;
}
/**
* @param internalURL Keystone internal URL
*/
public void setInternalURL(URI internalURL) {
this.internalURL = internalURL;
}
/**
* @return public accessible URL
*/
public URI getPublicURL() {
return publicURL;
}
/**
* @param publicURL public URL
*/
public void setPublicURL(URI publicURL) {
this.publicURL = publicURL;
}
public URI getPublicURL2() {
return publicURL2;
}
public void setPublicURL2(URI publicURL2) {
this.publicURL2 = publicURL2;
}
/**
* @return Openstack region name
*/
public String getRegion() {
return region;
}
/**
* @param region Openstack region name
*/
public void setRegion(String region) {
this.region = region;
}
public String getTenantId() {
return tenantId;
}
public void setTenantId(String tenantId) {
this.tenantId = tenantId;
}
public String getVersionId() {
return versionId;
}
public void setVersionId(String versionId) {
this.versionId = versionId;
}
public String getVersionInfo() {
return versionInfo;
}
public void setVersionInfo(String versionInfo) {
this.versionInfo = versionInfo;
}
public String getVersionList() {
return versionList;
}
public void setVersionList(String versionList) {
this.versionList = versionList;
}
}
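For orientation, one v2 catalog endpoint as Jackson would bind it into this class might look like the following; URLs and region are illustrative:
"endpoints" : [ {
  "adminURL" : "http://swift:8080",
  "internalURL" : "http://swift:8080/v1/AUTH_01011970",
  "publicURL" : "http://swift:8080/v1/AUTH_01011970",
  "region" : "RegionOne"
} ]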

View File

@ -1,102 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth.entities;
import org.codehaus.jackson.annotate.JsonIgnoreProperties;
import java.net.URI;
/**
* Openstack Swift endpoint description.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
@JsonIgnoreProperties(ignoreUnknown = true)
public class EndpointV3 {
/**
* endpoint id
*/
private String id;
/**
* Keystone URL
*/
private URI url;
/**
* Openstack region name
*/
private String region;
/**
* Keystone URL type
*/
private String iface;
/**
* @return endpoint id
*/
public String getId() {
return id;
}
/**
* @param id endpoint id
*/
public void setId(String id) {
this.id = id;
}
/**
* @return Keystone URL
*/
public URI getUrl() {
return url;
}
/**
* @param url Keystone URL
*/
public void setUrl(URI url) {
this.url = url;
}
/**
* @return Openstack region name
*/
public String getRegion() {
return region;
}
/**
* @param region Openstack region name
*/
public void setRegion(String region) {
this.region = region;
}
public String getInterface() {
return iface;
}
public void setInterface(String iface) {
this.iface = iface;
}
}
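Because interface is a Java keyword, the field is named iface and Jackson binds it through the getInterface/setInterface accessors, so a v3 endpoint such as this (illustrative values) maps cleanly:
{
  "id" : "39dc322ce86c4111b4f06c2eeae0841b",
  "interface" : "public",
  "region" : "RegionOne",
  "url" : "http://swift:8080/v1/AUTH_01011970"
}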

View File

@ -1,107 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth.entities;
import org.codehaus.jackson.annotate.JsonIgnoreProperties;
/**
* Tenant is an abstraction in Openstack which describes all account
* information and user privileges in the system.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
@JsonIgnoreProperties(ignoreUnknown = true)
public class Tenant {
/**
* tenant id
*/
private String id;
/**
* tenant short description which Keystone returns
*/
private String description;
/**
* whether the user account is enabled
*/
private boolean enabled;
/**
* tenant human readable name
*/
private String name;
/**
* @return tenant name
*/
public String getName() {
return name;
}
/**
* @param name tenant name
*/
public void setName(String name) {
this.name = name;
}
/**
* @return true if account enabled and false otherwise
*/
public boolean isEnabled() {
return enabled;
}
/**
* @param enabled enable or disable
*/
public void setEnabled(boolean enabled) {
this.enabled = enabled;
}
/**
* @return account short description
*/
public String getDescription() {
return description;
}
/**
* @param description set account description
*/
public void setDescription(String description) {
this.description = description;
}
/**
* @return tenant id
*/
public String getId() {
return id;
}
/**
* @param id tenant id
*/
public void setId(String id) {
this.id = id;
}
}

View File

@ -1,132 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.auth.entities;
import org.apache.hadoop.fs.swift.auth.Roles;
import org.codehaus.jackson.annotate.JsonIgnoreProperties;
import java.util.List;
/**
* Describes user entity in Keystone
* In different Swift installations the User entity is represented differently.
* To avoid JSON deserialization failures, unknown properties are ignored.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
@JsonIgnoreProperties(ignoreUnknown = true)
public class User {
/**
* user id in Keystone
*/
private String id;
/**
* user human readable name
*/
private String name;
/**
* user roles in Keystone
*/
private List<Roles> roles;
/**
* links to user roles
*/
private List<Object> roles_links;
/**
* human readable username in Keystone
*/
private String username;
/**
* @return user id
*/
public String getId() {
return id;
}
/**
* @param id user id
*/
public void setId(String id) {
this.id = id;
}
/**
* @return user name
*/
public String getName() {
return name;
}
/**
* @param name user name
*/
public void setName(String name) {
this.name = name;
}
/**
* @return user roles
*/
public List<Roles> getRoles() {
return roles;
}
/**
* @param roles sets user roles
*/
public void setRoles(List<Roles> roles) {
this.roles = roles;
}
/**
* @return user roles links
*/
public List<Object> getRoles_links() {
return roles_links;
}
/**
* @param roles_links user roles links
*/
public void setRoles_links(List<Object> roles_links) {
this.roles_links = roles_links;
}
/**
* @return username
*/
public String getUsername() {
return username;
}
/**
* @param username human readable user name
*/
public void setUsername(String username) {
this.username = username;
}
}

View File

@ -1,48 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
import org.apache.commons.httpclient.HttpMethod;
import java.net.URI;
/**
* An exception raised when an authentication request was rejected
*/
public class SwiftAuthenticationFailedException extends SwiftInvalidResponseException {
public SwiftAuthenticationFailedException(String message,
int statusCode,
String operation,
URI uri) {
super(message, statusCode, operation, uri);
}
public SwiftAuthenticationFailedException(String message,
String operation,
URI uri,
HttpMethod method) {
super(message, operation, uri, method);
}
@Override
public String exceptionTitle() {
return "Authentication Failure";
}
}

View File

@ -1,49 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
import org.apache.commons.httpclient.HttpMethod;
import java.net.URI;
/**
* Thrown to indicate that data locality can't be calculated or the requested path is incorrect.
* Data locality can't be calculated if the Openstack Swift version is too old.
*/
public class SwiftBadRequestException extends SwiftInvalidResponseException {
public SwiftBadRequestException(String message,
String operation,
URI uri,
HttpMethod method) {
super(message, operation, uri, method);
}
public SwiftBadRequestException(String message,
int statusCode,
String operation,
URI uri) {
super(message, statusCode, operation, uri);
}
@Override
public String exceptionTitle() {
return "BadRequest";
}
}

View File

@ -1,33 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
/**
* Exception raised to indicate there is some problem with how the Swift FS
* is configured
*/
public class SwiftConfigurationException extends SwiftException {
public SwiftConfigurationException(String message) {
super(message);
}
public SwiftConfigurationException(String message, Throwable cause) {
super(message, cause);
}
}

View File

@ -1,36 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
/**
* Exception raised when an attempt is made to use a closed stream
*/
public class SwiftConnectionClosedException extends SwiftException {
public static final String MESSAGE =
"Connection to Swift service has been closed";
public SwiftConnectionClosedException() {
super(MESSAGE);
}
public SwiftConnectionClosedException(String reason) {
super(MESSAGE + ": " + reason);
}
}

View File

@ -1,35 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
/**
* Thrown to indicate that the connection was lost or could not be established
*/
public class SwiftConnectionException extends SwiftException {
public SwiftConnectionException() {
}
public SwiftConnectionException(String message) {
super(message);
}
public SwiftConnectionException(String message, Throwable cause) {
super(message, cause);
}
}

View File

@ -1,43 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
import java.io.IOException;
/**
* A Swift-specific exception -subclasses exist
* for various specific problems.
*/
public class SwiftException extends IOException {
public SwiftException() {
super();
}
public SwiftException(String message) {
super(message);
}
public SwiftException(String message, Throwable cause) {
super(message, cause);
}
public SwiftException(Throwable cause) {
super(cause);
}
}

View File

@ -1,38 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
/**
* The internal state of the Swift client is wrong -presumably a sign
* of some bug
*/
public class SwiftInternalStateException extends SwiftException {
public SwiftInternalStateException(String message) {
super(message);
}
public SwiftInternalStateException(String message, Throwable cause) {
super(message, cause);
}
public SwiftInternalStateException(Throwable cause) {
super(cause);
}
}

View File

@ -1,114 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
import org.apache.commons.httpclient.HttpMethod;
import java.io.IOException;
import java.net.URI;
/**
* Exception raised when the HTTP status code is invalid. The status code,
* method name and operation URI are all included in the exception.
*/
public class SwiftInvalidResponseException extends SwiftConnectionException {
public final int statusCode;
public final String operation;
public final URI uri;
public final String body;
public SwiftInvalidResponseException(String message,
int statusCode,
String operation,
URI uri) {
super(message);
this.statusCode = statusCode;
this.operation = operation;
this.uri = uri;
this.body = "";
}
public SwiftInvalidResponseException(String message,
String operation,
URI uri,
HttpMethod method) {
super(message);
this.statusCode = method.getStatusCode();
this.operation = operation;
this.uri = uri;
String bodyAsString;
try {
bodyAsString = method.getResponseBodyAsString();
} catch (IOException e) {
bodyAsString = "";
}
this.body = bodyAsString;
}
public int getStatusCode() {
return statusCode;
}
public String getOperation() {
return operation;
}
public URI getUri() {
return uri;
}
public String getBody() {
return body;
}
/**
* Override point: title of an exception -this is used in the
* toString() method.
* @return the new exception title
*/
public String exceptionTitle() {
return "Invalid Response";
}
/**
* Build a description that includes the exception title, the URI,
* the message, the status code -and any body of the response
* @return the string value for display
*/
@Override
public String toString() {
StringBuilder msg = new StringBuilder(128 + body.length());
msg.append(exceptionTitle());
msg.append(": ");
msg.append(getMessage());
msg.append(" ");
msg.append(operation);
msg.append(" ");
msg.append(uri);
msg.append(" => ");
msg.append(statusCode);
if (!body.isEmpty()) {
msg.append(" : ");
msg.append(body);
}
return msg.toString();
}
}
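A rendered example of the format toString() builds, with hypothetical message, URI and body:
Invalid Response: Failed to find container GET https://swift:8080/v1/AUTH_demo/container => 404 : <html>Not Found</html>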

View File

@ -1,33 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
/**
* Exception raised when the JSON/object mapping fails.
*/
public class SwiftJsonMarshallingException extends SwiftException {
public SwiftJsonMarshallingException(String message) {
super(message);
}
public SwiftJsonMarshallingException(String message, Throwable cause) {
super(message, cause);
}
}

View File

@ -1,43 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
import org.apache.hadoop.fs.Path;
/**
* Exception raised when an operation is meant to work on a directory, but
* the target path is not a directory
*/
public class SwiftNotDirectoryException extends SwiftException {
private final Path path;
public SwiftNotDirectoryException(Path path) {
this(path, "");
}
public SwiftNotDirectoryException(Path path,
String message) {
super(path.toString() + message);
this.path = path;
}
public Path getPath() {
return path;
}
}

View File

@ -1,35 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
/**
* Used to relay exceptions upstream from the inner implementation
* to the public API, where they are downgraded to a log+failure.
* Making the exception visible internally aids testing.
*/
public class SwiftOperationFailedException extends SwiftException {
public SwiftOperationFailedException(String message) {
super(message);
}
public SwiftOperationFailedException(String message, Throwable cause) {
super(message, cause);
}
}

View File

@ -1,33 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
/**
* Exception raised when trying to create a file that already exists
* and the overwrite flag is set to false.
*/
public class SwiftPathExistsException extends SwiftException {
public SwiftPathExistsException(String message) {
super(message);
}
public SwiftPathExistsException(String message, Throwable cause) {
super(message, cause);
}
}

View File

@ -1,37 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
import org.apache.commons.httpclient.HttpMethod;
import java.net.URI;
/**
* Exception raised if a Swift endpoint returned an HTTP response indicating
* the caller is being throttled.
*/
public class SwiftThrottledRequestException extends
SwiftInvalidResponseException {
public SwiftThrottledRequestException(String message,
String operation,
URI uri,
HttpMethod method) {
super(message, operation, uri, method);
}
}

View File

@ -1,30 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.exceptions;
/**
* Exception raised on an unsupported feature in the FS API -such as
* <code>append()</code>
*/
public class SwiftUnsupportedFeatureException extends SwiftException {
public SwiftUnsupportedFeatureException(String message) {
super(message);
}
}

View File

@ -1,41 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.http;
import org.apache.commons.httpclient.methods.EntityEnclosingMethod;
/**
* Implementation for SwiftRestClient to make copy requests.
* COPY is a method that came with WebDAV (RFC2518), and is not something that
* can be handled by all proxies en route to a filesystem.
*/
class CopyMethod extends EntityEnclosingMethod {
public CopyMethod(String uri) {
super(uri);
}
/**
* @return http method name
*/
@Override
public String getName() {
return "COPY";
}
}
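A hypothetical same-package sketch (CopyMethod is package-private) of a Swift server-side copy with commons-httpclient 3.x; the host, token and object paths are illustrative:
import org.apache.commons.httpclient.HttpClient;

class CopyDemo {
  static void copyObject() throws java.io.IOException {
    HttpClient client = new HttpClient();
    CopyMethod copy =
        new CopyMethod("http://swift:8080/v1/AUTH_demo/container/src.txt");
    copy.setRequestHeader("X-Auth-Token", "token");              // Keystone token
    copy.setRequestHeader("Destination", "container/dest.txt");  // copy target
    try {
      // Swift answers 201 Created when the copy succeeds
      System.out.println("COPY => " + client.executeMethod(copy));
    } finally {
      copy.releaseConnection();
    }
  }
}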

View File

@ -1,99 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.http;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import java.io.IOException;
import java.lang.reflect.Constructor;
import java.net.ConnectException;
import java.net.NoRouteToHostException;
import java.net.SocketTimeoutException;
import java.net.UnknownHostException;
/**
* Variant of Hadoop's NetUtils exception wrapping, with URI awareness,
* which is also available in branch-1.
*/
public class ExceptionDiags {
private static final Log LOG = LogFactory.getLog(ExceptionDiags.class);
/** text to point users elsewhere: {@value} */
private static final String FOR_MORE_DETAILS_SEE
= " For more details see: ";
/** text included in wrapped exceptions if the host is null: {@value} */
public static final String UNKNOWN_HOST = "(unknown)";
/** Base URL of the Hadoop Wiki: {@value} */
public static final String HADOOP_WIKI = "http://wiki.apache.org/hadoop/";
/**
* Take an IOException and a URI, wrap it where possible with
* something that includes the URI
*
* @param dest target URI
* @param operation operation
* @param exception the caught exception.
* @return an exception to throw
*/
public static IOException wrapException(final String dest,
final String operation,
final IOException exception) {
String action = operation + " " + dest;
String xref = null;
if (exception instanceof ConnectException) {
xref = "ConnectionRefused";
} else if (exception instanceof UnknownHostException) {
xref = "UnknownHost";
} else if (exception instanceof SocketTimeoutException) {
xref = "SocketTimeout";
} else if (exception instanceof NoRouteToHostException) {
xref = "NoRouteToHost";
}
String msg = action
+ " failed on exception: "
+ exception;
if (xref != null) {
msg = msg + ";" + see(xref);
}
return wrapWithMessage(exception, msg);
}
private static String see(final String entry) {
return FOR_MORE_DETAILS_SEE + HADOOP_WIKI + entry;
}
@SuppressWarnings("unchecked")
private static <T extends IOException> T wrapWithMessage(
T exception, String msg) {
Class<? extends Throwable> clazz = exception.getClass();
try {
Constructor<? extends Throwable> ctor =
clazz.getConstructor(String.class);
Throwable t = ctor.newInstance(msg);
return (T) (t.initCause(exception));
} catch (Throwable e) {
LOG.warn("Unable to wrap exception of type " +
clazz + ": it has no (String) constructor", e);
return exception;
}
}
}
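A minimal usage sketch; connectTo() and the URI are hypothetical stand-ins for a real network call:
class WrapDemo {
  static void demo() throws java.io.IOException {
    try {
      connectTo("http://swift:8080/auth");
    } catch (java.net.ConnectException e) {
      // Rethrows the same exception type with the operation, URI and a
      // "ConnectionRefused" wiki cross-reference folded into the message.
      throw ExceptionDiags.wrapException("http://swift:8080/auth", "POST", e);
    }
  }
  private static void connectTo(String uri) throws java.net.ConnectException {
    throw new java.net.ConnectException("Connection refused"); // placeholder
  }
}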

View File

@ -1,45 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.http;
/**
* Response tuple from GET operations; combines the input stream with the content length
*/
public class HttpBodyContent {
private final long contentLength;
private final HttpInputStreamWithRelease inputStream;
/**
* build a body response
* @param inputStream input stream from the operation
* @param contentLength length of content; may be -1 for "don't know"
*/
public HttpBodyContent(HttpInputStreamWithRelease inputStream,
long contentLength) {
this.contentLength = contentLength;
this.inputStream = inputStream;
}
public long getContentLength() {
return contentLength;
}
public HttpInputStreamWithRelease getInputStream() {
return inputStream;
}
}

View File

@ -1,235 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.http;
import org.apache.commons.httpclient.HttpMethod;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException;
import org.apache.hadoop.fs.swift.util.SwiftUtils;
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
/**
* This replaces the input stream release class from JetS3t and AWS;
* # Failures in the constructor are relayed up instead of simply logged.
* # It is set up to be more robust at teardown.
* # Release logic is thread safe.
* Note that the inner stream itself carries no thread
* safety guarantees -this stream is not to be read across threads.
* The thread safety logic here is to ensure that even if somebody ignores
* that rule, the release code does not get entered twice -and that
* any release in one thread is picked up by read operations in all others.
*/
public class HttpInputStreamWithRelease extends InputStream {
private static final Log LOG =
LogFactory.getLog(HttpInputStreamWithRelease.class);
private final URI uri;
private HttpMethod method;
//flag to say the stream is released -volatile so that read operations
//pick it up even while unsynchronized.
private volatile boolean released;
//volatile flag to verify that data is consumed.
private volatile boolean dataConsumed;
private InputStream inStream;
/**
* In debug builds, this is filled in with the construction-time
* stack, which is then included in logs from the finalize() method.
*/
private final Exception constructionStack;
/**
* Why the stream is closed
*/
private String reasonClosed = "unopened";
public HttpInputStreamWithRelease(URI uri, HttpMethod method) throws
IOException {
this.uri = uri;
this.method = method;
constructionStack = LOG.isDebugEnabled() ? new Exception("stack") : null;
if (method == null) {
throw new IllegalArgumentException("Null 'method' parameter ");
}
try {
inStream = method.getResponseBodyAsStream();
} catch (IOException e) {
inStream = new ByteArrayInputStream(new byte[]{});
throw releaseAndRethrow("getResponseBodyAsStream() in constructor -" + e, e);
}
}
@Override
public void close() throws IOException {
release("close()", null);
}
/**
* Release logic
* @param reason reason for release (used in debug messages)
* @param ex exception that is a cause -null for non-exceptional releases
* @return true if the release took place here
* @throws IOException if the abort or close operations failed.
*/
private synchronized boolean release(String reason, Exception ex) throws
IOException {
if (!released) {
reasonClosed = reason;
try {
if (LOG.isDebugEnabled()) {
LOG.debug("Releasing connection to " + uri + ": " + reason, ex);
}
if (method != null) {
if (!dataConsumed) {
method.abort();
}
method.releaseConnection();
}
if (inStream != null) {
//this guard may seem un-needed, but a stack trace seen
//on the JetS3t predecessor implied that it
//is useful
inStream.close();
}
return true;
} finally {
//if something went wrong here, we do not want the release() operation
//to try and do anything in advance.
released = true;
dataConsumed = true;
}
} else {
return false;
}
}
/**
* Release the method, using the exception as a cause
* @param operation operation that failed
* @param ex the exception which triggered it.
* @return the exception to throw
*/
private IOException releaseAndRethrow(String operation, IOException ex) {
try {
release(operation, ex);
} catch (IOException ioe) {
LOG.debug("Exception during release: " + operation + " - " + ioe, ioe);
//make this the exception if there was none before
if (ex == null) {
ex = ioe;
}
}
return ex;
}
/**
* Assume that the connection is not released: throws an exception if it is
* @throws SwiftConnectionClosedException
*/
private synchronized void assumeNotReleased() throws SwiftConnectionClosedException {
if (released || inStream == null) {
throw new SwiftConnectionClosedException(reasonClosed);
}
}
@Override
public int available() throws IOException {
assumeNotReleased();
try {
return inStream.available();
} catch (IOException e) {
throw releaseAndRethrow("available() failed -" + e, e);
}
}
@Override
public int read() throws IOException {
assumeNotReleased();
int read = 0;
try {
read = inStream.read();
} catch (EOFException e) {
if (LOG.isDebugEnabled()) {
LOG.debug("EOF exception " + e, e);
}
read = -1;
} catch (IOException e) {
throw releaseAndRethrow("read()", e);
}
if (read < 0) {
dataConsumed = true;
release("read() -all data consumed", null);
}
return read;
}
@Override
public int read(byte[] b, int off, int len) throws IOException {
SwiftUtils.validateReadArgs(b, off, len);
//if the stream is already closed, then report an exception.
assumeNotReleased();
//now read in a buffer, reacting differently to different operations
int read;
try {
read = inStream.read(b, off, len);
} catch (EOFException e) {
if (LOG.isDebugEnabled()) {
LOG.debug("EOF exception " + e, e);
}
read = -1;
} catch (IOException e) {
throw releaseAndRethrow("read(b, off, " + len + ")", e);
}
if (read < 0) {
dataConsumed = true;
release("read() -all data consumed", null);
}
return read;
}
/**
* Finalizer does release the stream, but also logs at WARN level
* including the URI at fault
*/
@Override
protected void finalize() {
try {
if (release("finalize()", constructionStack)) {
LOG.warn("input stream of " + uri
+ " not closed properly -cleaned up in finalize()");
}
} catch (Exception e) {
//swallow anything that failed here
LOG.warn("Exception while releasing " + uri + "in finalizer",
e);
}
}
@Override
public String toString() {
return "HttpInputStreamWithRelease working with " + uri
+" released=" + released
+" dataConsumed=" + dataConsumed;
}
}
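
A minimal lifecycle sketch (the endpoint URL and the pre-executed GetMethod are placeholders, not anything from this repo): reading to EOF marks the data consumed and releases the connection, after which close() is a safe no-op:

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.hadoop.fs.swift.http.HttpInputStreamWithRelease;
import java.io.IOException;
import java.net.URI;

public class ReleaseSketch {
  public static void main(String[] args) throws IOException {
    URI uri = URI.create("http://swift.example.com/v1/AUTH_demo/c/o"); //placeholder
    GetMethod get = new GetMethod(uri.toString());
    new HttpClient().executeMethod(get);
    HttpInputStreamWithRelease in = new HttpInputStreamWithRelease(uri, get);
    try {
      while (in.read() >= 0) {
        //reading to EOF sets dataConsumed and releases the connection
      }
    } finally {
      in.close(); //idempotent: a no-op if the EOF path already released
    }
  }
}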

View File

@ -1,230 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.http;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException;
import java.net.URI;
import java.util.Properties;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.*;
/**
* This class implements the binding logic between Hadoop configurations
* and the swift rest client.
* <p/>
* The swift rest client takes a Properties instance containing
* the string values it uses to bind to a swift endpoint.
* <p/>
* This class extracts the values for a specific filesystem endpoint
* and then builds an appropriate Properties file.
*/
public final class RestClientBindings {
private static final Log LOG = LogFactory.getLog(RestClientBindings.class);
public static final String E_INVALID_NAME = "Invalid swift hostname '%s':" +
" hostname must be in the form container.service";
/**
* Public for testing : build the full prefix for use in resolving
* configuration items
*
* @param service service to use
* @return the prefix string <i>without any trailing "."</i>
*/
public static String buildSwiftInstancePrefix(String service) {
return SWIFT_SERVICE_PREFIX + service;
}
/**
* Raise an exception for an invalid service name
*
* @param hostname hostname that was being parsed
* @return an exception to throw
*/
private static SwiftConfigurationException invalidName(String hostname) {
return new SwiftConfigurationException(
String.format(E_INVALID_NAME, hostname));
}
/**
* Get the container name from the hostname -the single element before the
* first "." in the hostname
*
* @param hostname hostname to split
* @return the container
* @throws SwiftConfigurationException
*/
public static String extractContainerName(String hostname) throws
SwiftConfigurationException {
int i = hostname.indexOf(".");
if (i <= 0) {
throw invalidName(hostname);
}
return hostname.substring(0, i);
}
public static String extractContainerName(URI uri) throws
SwiftConfigurationException {
return extractContainerName(uri.getHost());
}
/**
* Get the service name from a longer hostname string
*
* @param hostname hostname
* @return the separated out service name
* @throws SwiftConfigurationException if the hostname was invalid
*/
public static String extractServiceName(String hostname) throws
SwiftConfigurationException {
int i = hostname.indexOf(".");
if (i <= 0) {
throw invalidName(hostname);
}
String service = hostname.substring(i + 1);
if (service.isEmpty() || service.contains(".")) {
//empty service, or a service containing dots -not currently supported
throw invalidName(hostname);
}
return service;
}
public static String extractServiceName(URI uri) throws
SwiftConfigurationException {
return extractServiceName(uri.getHost());
}
/**
* Build a properties instance bound to the configuration file -using
* the filesystem URI as the source of the information.
*
* @param fsURI filesystem URI
* @param conf configuration
* @return a properties file with the instance-specific properties extracted
* and bound to the swift client properties.
* @throws SwiftConfigurationException if the configuration is invalid
*/
public static Properties bind(URI fsURI, Configuration conf) throws
SwiftConfigurationException {
String host = fsURI.getHost();
if (host == null || host.isEmpty()) {
//expect shortnames -> conf names
throw invalidName(host);
}
String container = extractContainerName(host);
String service = extractServiceName(host);
//build filename schema
String prefix = buildSwiftInstancePrefix(service);
if (LOG.isDebugEnabled()) {
LOG.debug("Filesystem " + fsURI
+ " is using configuration keys " + prefix);
}
Properties props = new Properties();
props.setProperty(SWIFT_SERVICE_PROPERTY, service);
props.setProperty(SWIFT_CONTAINER_PROPERTY, container);
copy(conf, prefix + DOT_AUTH_URL, props, SWIFT_AUTH_PROPERTY, true);
copy(conf, prefix + DOT_AUTH_ENDPOINT_PREFIX, props,
SWIFT_AUTH_ENDPOINT_PREFIX, true);
copy(conf, prefix + DOT_USERNAME, props, SWIFT_USERNAME_PROPERTY, true);
copy(conf, prefix + DOT_APIKEY, props, SWIFT_APIKEY_PROPERTY, false);
//the password is mandatory only when no API key was supplied;
//note Properties.contains() searches values, so containsKey() is the right test
copy(conf, prefix + DOT_PASSWORD, props, SWIFT_PASSWORD_PROPERTY,
!props.containsKey(SWIFT_APIKEY_PROPERTY));
copy(conf, prefix + DOT_TRUST_ID, props, SWIFT_TRUST_ID_PROPERTY, false);
copy(conf, prefix + DOT_DOMAIN_NAME, props, SWIFT_DOMAIN_NAME_PROPERTY, false);
copy(conf, prefix + DOT_DOMAIN_ID, props, SWIFT_DOMAIN_ID_PROPERTY, false);
copy(conf, prefix + DOT_TENANT, props, SWIFT_TENANT_PROPERTY, false);
copy(conf, prefix + DOT_CONTAINER_TENANT, props, SWIFT_CONTAINER_TENANT_PROPERTY, false);
copy(conf, prefix + DOT_REGION, props, SWIFT_REGION_PROPERTY, false);
copy(conf, prefix + DOT_HTTP_PORT, props, SWIFT_HTTP_PORT_PROPERTY, false);
copy(conf, prefix +
DOT_HTTPS_PORT, props, SWIFT_HTTPS_PORT_PROPERTY, false);
copyBool(conf, prefix + DOT_PUBLIC, props, SWIFT_PUBLIC_PROPERTY, false);
copyBool(conf, prefix + DOT_LOCATION_AWARE, props,
SWIFT_LOCATION_AWARE_PROPERTY, false);
return props;
}
/**
* Extract a boolean value from the configuration and copy it to the
* properties instance.
* @param conf source configuration
* @param confKey key in the configuration file
* @param props destination property set
* @param propsKey key in the property set
* @param defVal default value
*/
private static void copyBool(Configuration conf,
String confKey,
Properties props,
String propsKey,
boolean defVal) {
boolean b = conf.getBoolean(confKey, defVal);
props.setProperty(propsKey, Boolean.toString(b));
}
private static void set(Properties props, String key, String optVal) {
if (optVal != null) {
props.setProperty(key, optVal);
}
}
/**
* Copy a (trimmed) property from the configuration file to the properties file.
* <p/>
* If marked as required and not found in the configuration, an
* exception is raised.
* If not required -and missing- then the property will not be set.
* In this case, if the property is already in the Properties instance,
* it will remain untouched.
*
* @param conf source configuration
* @param confKey key in the configuration file
* @param props destination property set
* @param propsKey key in the property set
* @param required is the property required
* @throws SwiftConfigurationException if the property is required but was
* not found in the configuration instance.
*/
public static void copy(Configuration conf, String confKey, Properties props,
String propsKey,
boolean required) throws SwiftConfigurationException {
//TODO: replace. version compatibility issue conf.getTrimmed fails with NoSuchMethodError
String val = conf.get(confKey);
if (val != null) {
val = val.trim();
}
if (required && val == null) {
throw new SwiftConfigurationException(
"Missing mandatory configuration option: " + confKey);
}
set(props, propsKey, val);
}
}
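
An illustrative binding (the service name "myprovider" and all endpoint values are hypothetical) showing how the per-service configuration keys are flattened into the property set the rest client consumes:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.swift.http.RestClientBindings;
import java.net.URI;
import java.util.Properties;

public class BindingSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.swift.service.myprovider.auth.url",
        "https://keystone.example.com:5000/v2.0/tokens");
    conf.set("fs.swift.service.myprovider.username", "demo");
    conf.set("fs.swift.service.myprovider.password", "secret");
    conf.set("fs.swift.service.myprovider.tenant", "demo");
    //swift://<container>.<service>/ -> container "data", service "myprovider"
    Properties props =
        RestClientBindings.bind(URI.create("swift://data.myprovider/"), conf);
    System.out.println(props.getProperty("fs.swift.username")); //prints "demo"
  }
}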

View File

@ -1,275 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.http;
import org.apache.hadoop.util.VersionInfo;
/**
* Constants used in the Swift REST protocol,
* and in the properties used to configure the {@link SwiftRestClient}.
*/
public class SwiftProtocolConstants {
/**
* Swift-specific header for authentication: {@value}
*/
public static final String HEADER_AUTH_KEY = "X-Auth-Token";
/**
* Default port used by Swift for HTTP
*/
public static final int SWIFT_HTTP_PORT = 8080;
/**
* Default port used by Swift Auth for HTTPS
*/
public static final int SWIFT_HTTPS_PORT = 443;
/** HTTP standard {@value} header */
public static final String HEADER_RANGE = "Range";
/** HTTP standard {@value} header */
public static final String HEADER_DESTINATION = "Destination";
/** HTTP standard {@value} header */
public static final String HEADER_LAST_MODIFIED = "Last-Modified";
/** HTTP standard {@value} header */
public static final String HEADER_CONTENT_LENGTH = "Content-Length";
/** HTTP standard {@value} header */
public static final String HEADER_CONTENT_RANGE = "Content-Range";
/**
* Pattern for range headers
*/
public static final String SWIFT_RANGE_HEADER_FORMAT_PATTERN = "bytes=%d-%d";
/**
* section in the JSON catalog provided after auth listing the swift FS:
* {@value}
*/
public static final String SERVICE_CATALOG_SWIFT = "swift";
/**
* section in the JSON catalog provided after auth listing the cloudfiles;
* this is an alternate catalog entry name
* {@value}
*/
public static final String SERVICE_CATALOG_CLOUD_FILES = "cloudFiles";
/**
* section in the JSON catalog provided after auth listing the object store;
* this is an alternate catalog entry name
* {@value}
*/
public static final String SERVICE_CATALOG_OBJECT_STORE = "object-store";
/**
* Swift-specific header: object manifest used in the final upload
* of a multipart operation: {@value}
*/
public static final String X_OBJECT_MANIFEST = "X-Object-Manifest";
/**
* Swift-specific header -#of objects in a container: {@value}
*/
public static final String X_CONTAINER_OBJECT_COUNT =
"X-Container-Object-Count";
/**
* Swift-specific header: no. of bytes used in a container {@value}
*/
public static final String X_CONTAINER_BYTES_USED = "X-Container-Bytes-Used";
/**
* Header to set when requesting the latest version of a file: {@value}
*/
public static final String X_NEWEST = "X-Newest";
/**
* throttled response sent by some endpoints.
*/
public static final int SC_THROTTLED_498 = 498;
/**
* Status code for throttled operations, per RFC 6585
*/
public static final int SC_TOO_MANY_REQUESTS_429 = 429;
public static final String FS_SWIFT = "fs.swift";
/**
* Prefix for all instance-specific values in the configuration: {@value}
*/
public static final String SWIFT_SERVICE_PREFIX = FS_SWIFT + ".service.";
/**
* timeout for all connections: {@value}
*/
public static final String SWIFT_CONNECTION_TIMEOUT =
FS_SWIFT + ".connect.timeout";
/**
* socket timeout for all connections: {@value}
*/
public static final String SWIFT_SOCKET_TIMEOUT =
FS_SWIFT + ".socket.timeout";
/**
* the default socket timeout in millis {@value}.
* This controls how long the connection waits for responses from
* servers.
*/
public static final int DEFAULT_SOCKET_TIMEOUT = 60000;
/**
* connection retry count for all connections: {@value}
*/
public static final String SWIFT_RETRY_COUNT =
FS_SWIFT + ".connect.retry.count";
/**
* delay in millis between bulk (delete, rename, copy) operations: {@value}
*/
public static final String SWIFT_THROTTLE_DELAY =
FS_SWIFT + ".connect.throttle.delay";
/**
* the default throttle delay in millis {@value}
*/
public static final int DEFAULT_THROTTLE_DELAY = 0;
/**
* blocksize for all filesystems: {@value}
*/
public static final String SWIFT_BLOCKSIZE =
FS_SWIFT + ".blocksize";
/**
* the default blocksize for filesystems in KB: {@value}
*/
public static final int DEFAULT_SWIFT_BLOCKSIZE = 32 * 1024;
/**
* partition size for all filesystems in KB: {@value}
*/
public static final String SWIFT_PARTITION_SIZE =
FS_SWIFT + ".partsize";
/**
* The default partition size for uploads: {@value}
*/
public static final int DEFAULT_SWIFT_PARTITION_SIZE = 4608*1024;
/**
* request size for reads in KB: {@value}
*/
public static final String SWIFT_REQUEST_SIZE =
FS_SWIFT + ".requestsize";
/**
* The default request size for reads: {@value}
*/
public static final int DEFAULT_SWIFT_REQUEST_SIZE = 64;
public static final String HEADER_USER_AGENT="User-Agent";
/**
* The user agent sent in requests.
*/
public static final String SWIFT_USER_AGENT= "Apache Hadoop Swift Client "
+ VersionInfo.getBuildVersion();
/**
* Key for passing the service name as a property -not read from the
* configuration : {@value}
*/
public static final String DOT_SERVICE = ".SERVICE-NAME";
/**
* Key for passing the container name as a property -not read from the
* configuration : {@value}
*/
public static final String DOT_CONTAINER = ".CONTAINER-NAME";
public static final String DOT_AUTH_URL = ".auth.url";
public static final String DOT_AUTH_ENDPOINT_PREFIX = ".auth.endpoint.prefix";
public static final String DOT_TENANT = ".tenant";
public static final String DOT_CONTAINER_TENANT = ".container.tenant";
public static final String DOT_USERNAME = ".username";
public static final String DOT_PASSWORD = ".password";
public static final String DOT_TRUST_ID = ".trust.id";
public static final String DOT_DOMAIN_NAME = ".domain.name";
public static final String DOT_DOMAIN_ID = ".domain.id";
public static final String DOT_HTTP_PORT = ".http.port";
public static final String DOT_HTTPS_PORT = ".https.port";
public static final String DOT_REGION = ".region";
public static final String DOT_PROXY_HOST = ".proxy.host";
public static final String DOT_PROXY_PORT = ".proxy.port";
public static final String DOT_LOCATION_AWARE = ".location-aware";
public static final String DOT_APIKEY = ".apikey";
public static final String DOT_USE_APIKEY = ".useApikey";
/**
* flag to say use public URL
*/
public static final String DOT_PUBLIC = ".public";
public static final String SWIFT_SERVICE_PROPERTY = FS_SWIFT + DOT_SERVICE;
public static final String SWIFT_CONTAINER_PROPERTY = FS_SWIFT + DOT_CONTAINER;
public static final String SWIFT_AUTH_PROPERTY = FS_SWIFT + DOT_AUTH_URL;
public static final String SWIFT_AUTH_ENDPOINT_PREFIX =
FS_SWIFT + DOT_AUTH_ENDPOINT_PREFIX;
public static final String SWIFT_TENANT_PROPERTY = FS_SWIFT + DOT_TENANT;
public static final String SWIFT_CONTAINER_TENANT_PROPERTY = FS_SWIFT + DOT_CONTAINER_TENANT;
public static final String SWIFT_USERNAME_PROPERTY = FS_SWIFT + DOT_USERNAME;
public static final String SWIFT_PASSWORD_PROPERTY = FS_SWIFT + DOT_PASSWORD;
public static final String SWIFT_TRUST_ID_PROPERTY = FS_SWIFT + DOT_TRUST_ID;
public static final String SWIFT_DOMAIN_NAME_PROPERTY = FS_SWIFT + DOT_DOMAIN_NAME;
public static final String SWIFT_DOMAIN_ID_PROPERTY = FS_SWIFT + DOT_DOMAIN_ID;
public static final String SWIFT_APIKEY_PROPERTY = FS_SWIFT + DOT_APIKEY;
public static final String SWIFT_HTTP_PORT_PROPERTY = FS_SWIFT + DOT_HTTP_PORT;
public static final String SWIFT_HTTPS_PORT_PROPERTY = FS_SWIFT
+ DOT_HTTPS_PORT;
public static final String SWIFT_REGION_PROPERTY = FS_SWIFT + DOT_REGION;
public static final String SWIFT_PUBLIC_PROPERTY = FS_SWIFT + DOT_PUBLIC;
public static final String SWIFT_USE_API_KEY_PROPERTY = FS_SWIFT + DOT_USE_APIKEY;
public static final String SWIFT_LOCATION_AWARE_PROPERTY = FS_SWIFT +
DOT_LOCATION_AWARE;
public static final String SWIFT_PROXY_HOST_PROPERTY = FS_SWIFT + DOT_PROXY_HOST;
public static final String SWIFT_PROXY_PORT_PROPERTY = FS_SWIFT + DOT_PROXY_PORT;
public static final String HTTP_ROUTE_DEFAULT_PROXY =
"http.route.default-proxy";
/**
* Topology to return when a block location is requested
*/
public static final String TOPOLOGY_PATH = "/swift/unknown";
/**
* Block location to return when a block location is requested
*/
public static final String BLOCK_LOCATION = "/default-rack/swift";
/**
* Default number of attempts to retry a connect request: {@value}
*/
static final int DEFAULT_RETRY_COUNT = 3;
/**
* Default timeout in milliseconds for connection requests: {@value}
*/
static final int DEFAULT_CONNECT_TIMEOUT = 15000;
}
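
As a small illustration (the service name is hypothetical), per-service configuration keys are composed from SWIFT_SERVICE_PREFIX, the service name, and a DOT_* suffix:

import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.*;

public class KeySketch {
  public static void main(String[] args) {
    String service = "myprovider"; //hypothetical service name
    String prefix = SWIFT_SERVICE_PREFIX + service;
    //prints "fs.swift.service.myprovider.auth.url"
    System.out.println(prefix + DOT_AUTH_URL);
  }
}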

View File

@ -1,81 +0,0 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<html>
<head>
<title>Swift Filesystem Client for Apache Hadoop</title>
</head>
<body>
<h1>
Swift Filesystem Client for Apache Hadoop
</h1>
<h2>Introduction</h2>
<div>This package provides support in Apache Hadoop for the OpenStack Swift
object store, allowing client applications -including MR jobs- to
read and write data in Swift.
</div>
<div>Design Goals</div>
<ol>
<li>Give clients access to SwiftFS files, similar to S3n.</li>
<li>maybe: support a Swift Block store -- at least until Swift's
support for &gt;5GB files has stabilized.
</li>
<li>Support for data-locality if the Swift FS provides file location information</li>
<li>Support access to multiple Swift filesystems in the same client/task.</li>
<li>Authenticate using the Keystone APIs.</li>
<li>Avoid dependency on unmaintained libraries.</li>
</ol>
<h2>Supporting multiple Swift Filesystems</h2>
The goal of supporting multiple swift filesystems simultaneously changes how
clusters are named and authenticated. In Hadoop's S3 and S3N filesystems, the "bucket" into
which objects are stored is directly named in the URL, such as
<code>s3n://bucket/object1</code>. The Hadoop configuration contains a
single set of login credentials for S3 (username and key), which are used to
authenticate the HTTP operations.
For Swift, we need to know not only the "container" name, but also which credentials
to use to authenticate with it and which URL to use for authentication.
This has led to a different design pattern from S3, as instead of simple bucket names,
the hostname of a Swift container is two-level, the name of the service provider
being the second part: <code>swift://container.service/</code>
The <code>service</code> portion of this domain name is used as a reference into
the client settings and so identifies the service provider of that container.
<h2>Testing</h2>
<div>
The client code can be tested against public or private Swift instances; the
public services are (at the time of writing -January 2013-), Rackspace and
HP Cloud. Testing against both instances is how interoperability
can be verified.
</div>
</body>
</html>
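
A short sketch of the multi-service goal described above (the provider names are hypothetical, and fs.swift.impl plus per-service credentials are assumed to be configured elsewhere, e.g. in core-site.xml):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.net.URI;

public class TwoServicesSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    //two distinct services, distinguished by the hostname's second part
    FileSystem logs = FileSystem.get(URI.create("swift://logs.rackspace/"), conf);
    FileSystem backups = FileSystem.get(URI.create("swift://backups.hpcloud/"), conf);
    logs.listStatus(new Path("/"));
    backups.listStatus(new Path("/"));
  }
}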

View File

@ -1,47 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.snative;
import org.apache.hadoop.fs.BufferedFSInputStream;
import org.apache.hadoop.fs.FSInputStream;
import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException;
import java.io.IOException;
/**
* Add stricter compliance with the evolving FS specifications
*/
public class StrictBufferedFSInputStream extends BufferedFSInputStream {
public StrictBufferedFSInputStream(FSInputStream in,
int size) {
super(in, size);
}
@Override
public void seek(long pos) throws IOException {
if (pos < 0) {
throw new IOException("Negative position");
}
if (in == null) {
throw new SwiftConnectionClosedException("Stream closed");
}
super.seek(pos);
}
}
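
A brief sketch of the stricter contract (the wrapped stream is a placeholder): a negative seek fails immediately rather than surfacing later from the buffer:

import org.apache.hadoop.fs.FSInputStream;
import org.apache.hadoop.fs.swift.snative.StrictBufferedFSInputStream;
import java.io.IOException;

public class StrictSeekSketch {
  //"wrapped" is assumed to be any FSInputStream, e.g. a SwiftNativeInputStream
  static void demo(FSInputStream wrapped) {
    StrictBufferedFSInputStream in =
        new StrictBufferedFSInputStream(wrapped, 64 * 1024);
    try {
      in.seek(-1);
    } catch (IOException expected) {
      //"Negative position" is raised up front, not deferred to the next read
    }
  }
}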

View File

@ -1,119 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.snative;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.fs.swift.util.SwiftObjectPath;
/**
* A subclass of {@link FileStatus} that contains the
* Swift-specific meta-data (e.g. DLO)
*/
public class SwiftFileStatus extends FileStatus {
private SwiftObjectPath dloPrefix = null;
private boolean isPseudoDirFlag = false;
public SwiftFileStatus() {
}
public SwiftFileStatus(long length,
boolean isdir,
int block_replication,
long blocksize, long modification_time, Path path) {
this(length, isdir, block_replication, blocksize, modification_time,
path, null);
}
public SwiftFileStatus(long length,
boolean isdir,
int block_replication,
long blocksize, long modification_time, Path path,
SwiftObjectPath dloPrefix) {
super(length, isdir, block_replication, blocksize, modification_time, path);
this.dloPrefix = dloPrefix;
}
public SwiftFileStatus(long length,
boolean isdir,
int block_replication,
long blocksize,
long modification_time,
long access_time,
FsPermission permission,
String owner, String group, Path path) {
super(length, isdir, block_replication, blocksize, modification_time,
access_time, permission, owner, group, path);
}
public static SwiftFileStatus createPseudoDirStatus(Path path) {
SwiftFileStatus status = new SwiftFileStatus(0, true, 1, 0,
System.currentTimeMillis(),
path);
status.isPseudoDirFlag = true;
return status;
}
/**
* An entry is a file if it is not a directory.
* By implementing it <i>and not marking it as an override</i> this
* subclass builds and runs in both Hadoop versions.
* @return the opposite value to {@link #isDir()}
*/
public boolean isFile() {
return !isDir();
}
/**
* Directory test
* @return true if the file is considered to be a directory
*/
public boolean isDirectory() {
return isDir();
}
public boolean isDLO() {
return dloPrefix != null;
}
public SwiftObjectPath getDLOPrefix() {
return dloPrefix;
}
public boolean isPseudoDir() {
return isPseudoDirFlag;
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder();
sb.append(getClass().getSimpleName());
sb.append("{ ");
sb.append("path=").append(getPath());
sb.append("; isDirectory=").append(isDirectory());
sb.append("; length=").append(getLen());
sb.append("; blocksize=").append(getBlockSize());
sb.append("; modification_time=").append(getModificationTime());
sb.append("}");
return sb.toString();
}
}
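
An illustrative construction (all values are hypothetical, and the (container, object) constructor of SwiftObjectPath is an assumption) showing how the DLO prefix marks a segmented large object, which getFileBlockLocations() later resolves per segment:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.swift.snative.SwiftFileStatus;
import org.apache.hadoop.fs.swift.util.SwiftObjectPath;

public class DloStatusSketch {
  public static void main(String[] args) {
    //all values are hypothetical; real instances come from the store layer
    SwiftFileStatus status = new SwiftFileStatus(
        6L * 1024 * 1024 * 1024,      //6 GB -larger than a single object
        false, 1, 32 * 1024 * 1024,
        System.currentTimeMillis(),
        new Path("swift://data.myprovider/big.bin"),
        new SwiftObjectPath("data", "/big.bin-segments")); //assumed ctor args
    //a DLO is still a file; its block locations are resolved per segment
    System.out.println(status.isDLO() + " " + status.isFile());
  }
}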

View File

@ -1,711 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.snative;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException;
import org.apache.hadoop.fs.swift.exceptions.SwiftNotDirectoryException;
import org.apache.hadoop.fs.swift.exceptions.SwiftOperationFailedException;
import org.apache.hadoop.fs.swift.exceptions.SwiftPathExistsException;
import org.apache.hadoop.fs.swift.exceptions.SwiftUnsupportedFeatureException;
import org.apache.hadoop.fs.swift.http.SwiftProtocolConstants;
import org.apache.hadoop.fs.swift.util.DurationStats;
import org.apache.hadoop.fs.swift.util.SwiftObjectPath;
import org.apache.hadoop.fs.swift.util.SwiftUtils;
import org.apache.hadoop.util.Progressable;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.OutputStream;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;
/**
* Swift file system implementation. Extends Hadoop FileSystem
*/
public class SwiftNativeFileSystem extends FileSystem {
/** filesystem prefix: {@value} */
public static final String SWIFT = "swift";
private static final Log LOG =
LogFactory.getLog(SwiftNativeFileSystem.class);
/**
* path to user work directory for storing temporary files
*/
private Path workingDir;
/**
* Swift URI
*/
private URI uri;
/**
* reference to swiftFileSystemStore
*/
private SwiftNativeFileSystemStore store;
/**
* Default constructor for Hadoop
*/
public SwiftNativeFileSystem() {
// set client in initialize()
}
/**
* This constructor used for testing purposes
*/
public SwiftNativeFileSystem(SwiftNativeFileSystemStore store) {
this.store = store;
}
/**
* This is for testing
* @return the inner store class
*/
public SwiftNativeFileSystemStore getStore() {
return store;
}
// @Override
public String getScheme() {
return SWIFT;
}
/**
* default class initialization
*
* @param fsuri path to Swift
* @param conf Hadoop configuration
* @throws IOException
*/
@Override
public void initialize(URI fsuri, Configuration conf) throws IOException {
super.initialize(fsuri, conf);
setConf(conf);
if (store == null) {
store = new SwiftNativeFileSystemStore();
}
this.uri = fsuri;
String username = System.getProperty("user.name");
this.workingDir = new Path("/user", username)
.makeQualified(uri, new Path(username));
if (LOG.isDebugEnabled()) {
LOG.debug("Initializing SwiftNativeFileSystem against URI " + uri
+ " and working dir " + workingDir);
}
store.initialize(uri, conf);
LOG.debug("SwiftFileSystem initialized");
}
/**
* @return path to Swift
*/
@Override
public URI getUri() {
return uri;
}
@Override
public String toString() {
return "Swift FileSystem " + store;
}
/**
* Path to user working directory
*
* @return Hadoop path
*/
@Override
public Path getWorkingDirectory() {
return workingDir;
}
@Override
public String getCanonicalServiceName() {
return null;
}
/**
* @param dir user working directory
*/
@Override
public void setWorkingDirectory(Path dir) {
workingDir = makeAbsolute(dir);
if (LOG.isDebugEnabled()) {
LOG.debug("SwiftFileSystem.setWorkingDirectory to " + dir);
}
}
/**
* Return a file status object that represents the path.
*
* @param path The path we want information from
* @return a FileStatus object
*/
@Override
public FileStatus getFileStatus(Path path) throws IOException {
Path absolutePath = makeAbsolute(path);
return store.getObjectMetadata(absolutePath);
}
/**
* The blocksize of this filesystem is set by the property
* SwiftProtocolConstants.SWIFT_BLOCKSIZE; the default is the value of
* SwiftProtocolConstants.DEFAULT_SWIFT_BLOCKSIZE.
* @return the blocksize for this FS.
*/
@Override
public long getDefaultBlockSize() {
return store.getBlocksize();
}
/**
* The blocksize for this filesystem.
* @see #getDefaultBlockSize()
* @param f path of file
* @return the blocksize for the path
*/
@Override
public long getDefaultBlockSize(Path f) {
return store.getBlocksize();
}
@Override
public long getBlockSize(Path path) throws IOException {
return store.getBlocksize();
}
/**
* Return an array containing hostnames, offset and size of
* portions of the given file. For a nonexistent
* file or regions, null will be returned.
* <p/>
* This call is most helpful with DFS, where it returns
* hostnames of machines that contain the given file.
* <p/>
* The FileSystem will simply return an element containing 'localhost'.
*/
@Override
public BlockLocation[] getFileBlockLocations(FileStatus file,
long start,
long len) throws IOException {
//argument checks
if (file == null) {
return null;
}
if (file.isDir()) {
return new BlockLocation[0];
}
if (start < 0 || len < 0) {
throw new IllegalArgumentException("Negative start or len parameter" +
" to getFileBlockLocations");
}
if (file.getLen() <= start) {
return new BlockLocation[0];
}
// Check if requested file in Swift is more than 5Gb. In this case
// each block has its own location -which may be determinable
// from the Swift client API, depending on the remote server
final FileStatus[] listOfFileBlocks;
if (file instanceof SwiftFileStatus && ((SwiftFileStatus)file).isDLO()) {
listOfFileBlocks = store.listSegments(file, true);
} else {
listOfFileBlocks = null;
}
List<URI> locations = new ArrayList<URI>();
if (listOfFileBlocks != null && listOfFileBlocks.length > 1) {
for (FileStatus fileStatus : listOfFileBlocks) {
if (SwiftObjectPath.fromPath(uri, fileStatus.getPath())
.equals(SwiftObjectPath.fromPath(uri, file.getPath()))) {
continue;
}
locations.addAll(store.getObjectLocation(fileStatus.getPath()));
}
} else {
locations = store.getObjectLocation(file.getPath());
}
if (locations.isEmpty()) {
LOG.debug("No locations returned for " + file.getPath());
//no locations were returned for the object
//fall back to the superclass
String[] name = {SwiftProtocolConstants.BLOCK_LOCATION};
String[] host = { "localhost" };
String[] topology={SwiftProtocolConstants.TOPOLOGY_PATH};
return new BlockLocation[] {
new BlockLocation(name, host, topology,0, file.getLen())
};
}
final String[] names = new String[locations.size()];
final String[] hosts = new String[locations.size()];
int i = 0;
for (URI location : locations) {
hosts[i] = location.getHost();
names[i] = location.getAuthority();
i++;
}
return new BlockLocation[]{
new BlockLocation(names, hosts, 0, file.getLen())
};
}
/**
* Create the parent directories.
* As an optimization, the entire hierarchy of parent
* directories is <i>Not</i> polled. Instead
* the tree is walked up from the last to the first,
* creating directories until one that exists is found.
*
* This strategy means if a file is created in an existing directory,
* one quick poll suffices.
*
* There is a big assumption here: that all parent directories of an existing
* directory also exist.
* @param path path to create.
* @param permission to apply to files
* @return true if the operation was successful
* @throws IOException on a problem
*/
@Override
public boolean mkdirs(Path path, FsPermission permission) throws IOException {
if (LOG.isDebugEnabled()) {
LOG.debug("SwiftFileSystem.mkdirs: " + path);
}
Path directory = makeAbsolute(path);
//build a list of paths to create
List<Path> paths = new ArrayList<Path>();
while (shouldCreate(directory)) {
//this directory needs creation, add to the list
paths.add(0, directory);
//now see if the parent needs to be created
directory = directory.getParent();
}
//go through the list of directories to create
for (Path p : paths) {
if (isNotRoot(p)) {
//perform a mkdir operation without any polling of
//the far end first
forceMkdir(p);
}
}
//if an exception was not thrown, this operation is considered
//a success
return true;
}
private boolean isNotRoot(Path absolutePath) {
return !isRoot(absolutePath);
}
private boolean isRoot(Path absolutePath) {
return absolutePath.getParent() == null;
}
/**
* internal implementation of directory creation.
*
* @param path path to file
* @return true if the directory was created; false if no creation was needed
* @throws IOException if specified path is file instead of directory
*/
private boolean mkdir(Path path) throws IOException {
Path directory = makeAbsolute(path);
boolean shouldCreate = shouldCreate(directory);
if (shouldCreate) {
forceMkdir(directory);
}
return shouldCreate;
}
/**
* Should mkdir create this directory?
* If the directory is root : false
* If the entry exists and is a directory: false
* If the entry exists and is a file: exception
* else: true
* @param directory path to query
* @return true iff the directory should be created
* @throws IOException IO problems
* @throws SwiftNotDirectoryException if the path references a file
*/
private boolean shouldCreate(Path directory) throws IOException {
FileStatus fileStatus;
boolean shouldCreate;
if (isRoot(directory)) {
//it's the base dir, bail out immediately
return false;
}
try {
//find out about the path
fileStatus = getFileStatus(directory);
if (!fileStatus.isDir()) {
//if it's a file, raise an error
throw new SwiftNotDirectoryException(directory,
String.format(": can't mkdir since it exists and is not a directory: %s",
fileStatus));
} else {
//path exists, and it is a directory
if (LOG.isDebugEnabled()) {
LOG.debug("skipping mkdir(" + directory + ") as it exists already");
}
shouldCreate = false;
}
} catch (FileNotFoundException e) {
shouldCreate = true;
}
return shouldCreate;
}
/**
* mkdir of a directory -irrespective of what was there underneath.
* There are no checks for the directory existing, there not
* being a path there, etc. etc. Those are assumed to have
* taken place already
* @param absolutePath path to create
* @throws IOException IO problems
*/
private void forceMkdir(Path absolutePath) throws IOException {
if (LOG.isDebugEnabled()) {
LOG.debug("Making dir '" + absolutePath + "' in Swift");
}
//file is not found: it must be created
store.createDirectory(absolutePath);
}
/**
* List the statuses of the files/directories in the given path if the path is
* a directory.
*
* @param path given path
* @return the statuses of the files/directories in the given path
* @throws IOException
*/
@Override
public FileStatus[] listStatus(Path path) throws IOException {
if (LOG.isDebugEnabled()) {
LOG.debug("SwiftFileSystem.listStatus for: " + path);
}
Path absolutePath = makeAbsolute(path);
FileStatus status = getFileStatus(absolutePath);
if (status.isDir()) {
return store.listSubPaths(absolutePath, false, true);
} else {
return new FileStatus[] {status};
}
}
/**
* This optional operation is not supported
*/
@Override
public FSDataOutputStream append(Path f, int bufferSize, Progressable progress)
throws IOException {
LOG.debug("SwiftFileSystem.append");
throw new SwiftUnsupportedFeatureException("Not supported: append()");
}
/**
* @param permission Currently ignored.
*/
@Override
public FSDataOutputStream create(Path file, FsPermission permission,
boolean overwrite, int bufferSize,
short replication, long blockSize,
Progressable progress)
throws IOException {
LOG.debug("SwiftFileSystem.create");
FileStatus fileStatus = null;
Path absolutePath = makeAbsolute(file);
try {
fileStatus = getFileStatus(absolutePath);
} catch (FileNotFoundException e) {
//the file isn't there.
}
if (fileStatus != null) {
//the path exists -action depends on whether or not it is a directory,
//and what the overwrite policy is.
//What is clear at this point is that if the entry exists, there's
//no need to bother creating any parent entries
if (fileStatus.isDir()) {
//here someone is trying to create a file over a directory
/* we can't throw an exception here as there is no easy way to distinguish
a file from the dir
throw new SwiftPathExistsException("Cannot create a file over a directory:"
+ file);
*/
if (LOG.isDebugEnabled()) {
LOG.debug("Overwriting either an empty file or a directory");
}
}
if (overwrite) {
//overwrite set -> delete the object.
store.delete(absolutePath, true);
} else {
throw new SwiftPathExistsException("Path exists: " + file);
}
} else {
// destination does not exist -trigger creation of the parent
Path parent = file.getParent();
if (parent != null) {
if (!mkdirs(parent)) {
throw new SwiftOperationFailedException(
"Mkdirs failed to create " + parent);
}
}
}
SwiftNativeOutputStream out = createSwiftOutputStream(file);
return new FSDataOutputStream(out, statistics);
}
/**
* Create the swift output stream
* @param path path to write to
* @return the new file
* @throws IOException
*/
protected SwiftNativeOutputStream createSwiftOutputStream(Path path) throws
IOException {
long partSizeKB = getStore().getPartsizeKB();
return new SwiftNativeOutputStream(getConf(),
getStore(),
path.toUri().toString(),
partSizeKB);
}
/**
* Opens an FSDataInputStream at the indicated Path.
*
* @param path the file name to open
* @param bufferSize the size of the buffer to be used.
* @return the input stream
* @throws FileNotFoundException if the file is not found
* @throws IOException any IO problem
*/
@Override
public FSDataInputStream open(Path path, int bufferSize) throws IOException {
int bufferSizeKB = getStore().getBufferSizeKB();
long readBlockSize = bufferSizeKB * 1024L;
return open(path, bufferSize, readBlockSize);
}
/**
* Low-level operation to also set the block size for this operation
* @param path the file name to open
* @param bufferSize the size of the buffer to be used.
* @param readBlockSize how big should the read block/buffer size be?
* @return the input stream
* @throws FileNotFoundException if the file is not found
* @throws IOException any IO problem
*/
public FSDataInputStream open(Path path,
int bufferSize,
long readBlockSize) throws IOException {
if (readBlockSize <= 0) {
throw new SwiftConfigurationException("Bad remote buffer size");
}
Path absolutePath = makeAbsolute(path);
return new FSDataInputStream(
new StrictBufferedFSInputStream(
new SwiftNativeInputStream(store,
statistics,
absolutePath,
readBlockSize),
bufferSize));
}
/**
* Renames Path src to Path dst. On swift this uses copy-and-delete
* and <i>is not atomic</i>.
*
* @param src path
* @param dst path
* @return true if directory renamed, false otherwise
* @throws IOException on problems
*/
@Override
public boolean rename(Path src, Path dst) throws IOException {
try {
store.rename(makeAbsolute(src), makeAbsolute(dst));
//success
return true;
} catch (SwiftOperationFailedException e) {
//downgrade to a failure
return false;
} catch (FileNotFoundException e) {
//downgrade to a failure
return false;
}
}
/**
* Delete a file or directory
*
* @param path the path to delete.
* @param recursive if path is a directory and set to
* true, the directory is deleted; otherwise an exception is thrown if the
* directory is not empty. In the
* case of a file, recursive can be set to either true or false.
* @return true if the object was deleted
* @throws IOException IO problems
*/
@Override
public boolean delete(Path path, boolean recursive) throws IOException {
try {
return store.delete(path, recursive);
} catch (FileNotFoundException e) {
//base path was not found.
return false;
}
}
/**
* Delete a file.
* This method is abstract in Hadoop 1.x; in 2.x+ it is non-abstract
* and deprecated
*/
@Override
public boolean delete(Path f) throws IOException {
return delete(f, true);
}
/**
* Makes path absolute
*
* @param path path to file
* @return absolute path
*/
protected Path makeAbsolute(Path path) {
if (path.isAbsolute()) {
return path;
}
return new Path(workingDir, path);
}
/**
* Get the current operation statistics
* @return a snapshot of the statistics
*/
public List<DurationStats> getOperationStatistics() {
return store.getOperationStatistics();
}
/**
* Low level method to do a deep listing of all entries, not stopping
* at the next directory entry. This is to let tests be confident that
* recursive deletes &c really are working.
* @param path path to recurse down
* @param newest ask for the newest data, potentially slower than not.
* @return a potentially empty array of file status
* @throws IOException any problem
*/
@InterfaceAudience.Private
public FileStatus[] listRawFileStatus(Path path, boolean newest) throws IOException {
return store.listSubPaths(makeAbsolute(path), true, newest);
}
/**
* Get the number of partitions written by an output stream
* This is for testing
* @param outputStream output stream
* @return the #of partitions written by that stream
*/
@InterfaceAudience.Private
public static int getPartitionsWritten(FSDataOutputStream outputStream) {
SwiftNativeOutputStream snos = getSwiftNativeOutputStream(outputStream);
return snos.getPartitionsWritten();
}
private static SwiftNativeOutputStream getSwiftNativeOutputStream(
FSDataOutputStream outputStream) {
OutputStream wrappedStream = outputStream.getWrappedStream();
return (SwiftNativeOutputStream) wrappedStream;
}
/**
* Get the size of partitions written by an output stream
* This is for testing
*
* @param outputStream output stream
* @return partition size in bytes
*/
@InterfaceAudience.Private
public static long getPartitionSize(FSDataOutputStream outputStream) {
SwiftNativeOutputStream snos = getSwiftNativeOutputStream(outputStream);
return snos.getFilePartSize();
}
/**
* Get the number of bytes written to an output stream
* This is for testing
*
* @param outputStream output stream
* @return the number of bytes written
*/
@InterfaceAudience.Private
public static long getBytesWritten(FSDataOutputStream outputStream) {
SwiftNativeOutputStream snos = getSwiftNativeOutputStream(outputStream);
return snos.getBytesWritten();
}
/**
* Get the number of bytes uploaded by an output stream
* to the swift cluster.
* This is for testing
*
* @param outputStream output stream
* @return the number of bytes uploaded
*/
@InterfaceAudience.Private
public static long getBytesUploaded(FSDataOutputStream outputStream) {
SwiftNativeOutputStream snos = getSwiftNativeOutputStream(outputStream);
return snos.getBytesUploaded();
}
}
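
A minimal end-to-end sketch (the service name and paths are hypothetical, and credentials under fs.swift.service.myprovider.* are assumed to be configured already):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.net.URI;

public class SwiftFsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    //register the swift:// scheme with this implementation
    conf.set("fs.swift.impl",
        "org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem");
    FileSystem fs =
        FileSystem.get(URI.create("swift://container.myprovider/"), conf);
    Path file = new Path("/user/demo/hello.txt");
    FSDataOutputStream out = fs.create(file);
    try {
      out.write("hello swift".getBytes("UTF-8"));
    } finally {
      out.close(); //the upload to Swift happens on close()
    }
    //rename is copy-and-delete underneath, so treat it as non-atomic
    fs.rename(file, new Path("/user/demo/hello-renamed.txt"));
  }
}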

View File

@ -1,401 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.snative;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.FSInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException;
import org.apache.hadoop.fs.swift.exceptions.SwiftException;
import org.apache.hadoop.fs.swift.http.HttpBodyContent;
import org.apache.hadoop.fs.swift.http.HttpInputStreamWithRelease;
import org.apache.hadoop.fs.swift.util.SwiftUtils;
import java.io.EOFException;
import java.io.IOException;
/**
* The input stream from remote Swift blobs.
* The class attempts to be buffer aware, and react to a forward seek operation
* by trying to scan ahead through the current block of data to find it.
* This accelerates some operations that do a lot of seek()/read() actions,
* including workloads (such as in the MR engine) that do a seek() immediately after
* an open().
*/
class SwiftNativeInputStream extends FSInputStream {
private static final Log LOG = LogFactory.getLog(SwiftNativeInputStream.class);
/**
* range requested off the server: {@value}
*/
private final long bufferSize;
/**
* File nativeStore instance
*/
private final SwiftNativeFileSystemStore nativeStore;
/**
* Hadoop statistics. Used to get info about number of reads, writes, etc.
*/
private final FileSystem.Statistics statistics;
/**
* Data input stream
*/
private HttpInputStreamWithRelease httpStream;
/**
* File path
*/
private final Path path;
/**
* Current position
*/
private long pos = 0;
/**
* Length of the file picked up at start time
*/
private long contentLength = -1;
/**
* Why the stream is closed
*/
private String reasonClosed = "unopened";
/**
* Offset in the range requested last
*/
private long rangeOffset = 0;
private long nextReadPosition = 0;
public SwiftNativeInputStream(SwiftNativeFileSystemStore storeNative,
FileSystem.Statistics statistics, Path path, long bufferSize)
throws IOException {
this.nativeStore = storeNative;
this.statistics = statistics;
this.path = path;
if (bufferSize <= 0) {
throw new IllegalArgumentException("Invalid buffer size");
}
this.bufferSize = bufferSize;
//initial buffer fill
this.httpStream = storeNative.getObject(path).getInputStream();
//fillBuffer(0);
}
/**
* Move to a new position within the file relative to where the pointer is now.
* Always call from a synchronized clause
* @param offset offset
*/
private synchronized void incPos(int offset) {
pos += offset;
rangeOffset += offset;
SwiftUtils.trace(LOG, "Inc: pos=%d bufferOffset=%d", pos, rangeOffset);
}
/**
* Update the start of the buffer; always call from a sync'd clause
* @param seekPos position sought.
* @param contentLength content length provided by response (may be -1)
*/
private synchronized void updateStartOfBufferPosition(long seekPos,
long contentLength) {
//reset the seek pointer
pos = seekPos;
//and put the buffer offset to 0
rangeOffset = 0;
this.contentLength = contentLength;
SwiftUtils.trace(LOG, "Move: pos=%d; bufferOffset=%d; contentLength=%d",
pos,
rangeOffset,
contentLength);
}
@Override
public synchronized int read() throws IOException {
verifyOpen();
int result = -1;
try {
seekStream();
result = httpStream.read();
} catch (IOException e) {
String msg = "IOException while reading " + path
+ ": ' +e, attempting to reopen.";
LOG.debug(msg, e);
if (reopenBuffer()) {
result = httpStream.read();
}
}
if (result != -1) {
incPos(1);
}
if (statistics != null && result != -1) {
statistics.incrementBytesRead(1);
}
return result;
}
@Override
public synchronized int read(byte[] b, int off, int len) throws IOException {
SwiftUtils.debug(LOG, "read(buffer, %d, %d)", off, len);
SwiftUtils.validateReadArgs(b, off, len);
int result = -1;
try {
verifyOpen();
result = httpStream.read(b, off, len);
} catch (IOException e) {
//other IO problems are viewed as transient and re-attempted
LOG.info("Received IOException while reading '" + path +
"', attempting to reopen: " + e);
LOG.debug("IOE on read()" + e, e);
if (reopenBuffer()) {
result = httpStream.read(b, off, len);
}
}
if (result > 0) {
incPos(result);
if (statistics != null) {
statistics.incrementBytesRead(result);
}
}
return result;
}
/**
* Re-open the buffer
* @return true iff the buffer was refilled with more data
* @throws IOException on a failure to re-open the stream
*/
private boolean reopenBuffer() throws IOException {
innerClose("reopening buffer to trigger refresh");
boolean success = false;
try {
fillBuffer(pos);
success = true;
} catch (EOFException eof) {
//the EOF has been reached
this.reasonClosed = "End of file";
}
return success;
}
/**
* close the stream. After this the stream is not usable -unless and until
* it is re-opened (which can happen on some of the buffer ops)
* This method is thread-safe and idempotent.
*
* @throws IOException on IO problems.
*/
@Override
public synchronized void close() throws IOException {
innerClose("closed");
}
private void innerClose(String reason) throws IOException {
try {
if (httpStream != null) {
reasonClosed = reason;
if (LOG.isDebugEnabled()) {
LOG.debug("Closing HTTP input stream : " + reason);
}
httpStream.close();
}
} finally {
httpStream = null;
}
}
/**
* Assume that the connection is not closed: throws an exception if it is
* @throws SwiftConnectionClosedException
*/
private void verifyOpen() throws SwiftConnectionClosedException {
if (httpStream == null) {
throw new SwiftConnectionClosedException(reasonClosed);
}
}
@Override
public synchronized String toString() {
return "SwiftNativeInputStream" +
" position=" + pos
+ " buffer size = " + bufferSize
+ " "
+ (httpStream != null ? httpStream.toString()
: (" no input stream: " + reasonClosed));
}
/**
* Treats any finalize() call without the input stream being closed
* as a serious problem, logging at error level
* @throws Throwable n/a
*/
@Override
protected void finalize() throws Throwable {
if (httpStream != null) {
LOG.error(
"Input stream is leaking handles by not being closed() properly: "
+ httpStream.toString());
}
}
/**
* Read through the specified number of bytes.
* The implementation iterates a byte at a time, which may seem inefficient
* compared to the read(bytes[]) method offered by input streams.
* However, if you look at the code that implements that method, it comes
* down to read() one char at a time -only here the return value is discarded.
*
*<p/>
* This is a no-op if the stream is closed
* @param bytes number of bytes to read.
* @throws IOException IO problems
* @throws SwiftException if a read returned -1.
*/
private int chompBytes(long bytes) throws IOException {
int count = 0;
if (httpStream != null) {
int result;
for (long i = 0; i < bytes; i++) {
result = httpStream.read();
if (result < 0) {
throw new SwiftException("Received error code while chomping input");
}
count ++;
incPos(1);
}
}
return count;
}
/**
* Seek to an offset. If the data is already in the buffer, move to it
* @param targetPos target position
* @throws IOException on any problem
*/
@Override
public synchronized void seek(long targetPos) throws IOException {
if (targetPos < 0) {
throw new IOException("Negative Seek offset not supported");
}
nextReadPosition = targetPos;
}
public synchronized void realSeek(long targetPos) throws IOException {
if (targetPos < 0) {
throw new IOException("Negative Seek offset not supported");
}
//there's some special handling of near-local data
//as the seek can be omitted if it is in/adjacent
long offset = targetPos - pos;
if (LOG.isDebugEnabled()) {
LOG.debug("Seek to " + targetPos + "; current pos =" + pos
+ "; offset="+offset);
}
if (offset == 0) {
LOG.debug("seek is no-op");
return;
}
if (offset < 0) {
LOG.debug("seek is backwards");
} else if ((rangeOffset + offset < bufferSize)) {
//if the seek is in range of that requested, scan forwards
//instead of closing and re-opening a new HTTP connection
SwiftUtils.debug(LOG,
"seek is within current stream"
+ "; pos= %d ; targetPos=%d; "
+ "offset= %d ; bufferOffset=%d",
pos, targetPos, offset, rangeOffset);
try {
LOG.debug("chomping ");
chompBytes(offset);
} catch (IOException e) {
//this is assumed to be recoverable with a seek -or more likely to fail
LOG.debug("while chomping ",e);
}
if (targetPos - pos == 0) {
LOG.trace("chomping successful");
return;
}
LOG.trace("chomping failed");
} else {
if (LOG.isDebugEnabled()) {
LOG.debug("Seek is beyond buffer size of " + bufferSize);
}
}
innerClose("seeking to " + targetPos);
fillBuffer(targetPos);
}
/**
* Lazy seek.
* @throws IOException
*/
private void seekStream() throws IOException {
if (httpStream != null && nextReadPosition == pos) {
// already at specified position
return;
}
realSeek(nextReadPosition);
}
/**
* Fill the buffer from the target position.
* If the target position == current position, the
* read still goes ahead; this is a way of handling partial read failures.
* @param targetPos target position
* @throws IOException IO problems on the read
*/
private void fillBuffer(long targetPos) throws IOException {
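//note: 'length' below is targetPos + bufferSize, i.e. it grows with the
//target position rather than being a fixed byte count; it is passed to
//getObject() together with the start offset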
long length = targetPos + bufferSize;
SwiftUtils.debug(LOG, "Fetching %d bytes starting at %d", length, targetPos);
HttpBodyContent blob = nativeStore.getObject(path, targetPos, length);
httpStream = blob.getInputStream();
updateStartOfBufferPosition(targetPos, blob.getContentLength());
}
@Override
public synchronized long getPos() throws IOException {
return pos;
}
/**
* This FS doesn't explicitly support multiple data sources, so
* return false here.
* @param targetPos the desired target position
* @return true if a new source of the data has been set up
* as the source of future reads
* @throws IOException IO problems
*/
@Override
public boolean seekToNewSource(long targetPos) throws IOException {
return false;
}
}
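A minimal usage sketch of the lazy seek above, driven through the public
Hadoop FileSystem API: seek() only records the target offset, and the ranged
HTTP GET is issued by the next read. The service, container and object names
here are hypothetical, and a configured Swift service is assumed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LazySeekSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    //hypothetical container "container" on a configured service "myservice"
    Path path = new Path("swift://container.myservice/data/set1");
    FileSystem fs = FileSystem.get(path.toUri(), conf);
    byte[] buf = new byte[4096];
    FSDataInputStream in = fs.open(path);
    try {
      //only records the next read position; no HTTP traffic yet
      in.seek(1024);
      //the read acts on the recorded position (see seekStream() above),
      //issuing the GET with a byte range starting there
      int read = in.read(buf);
      System.out.println("read " + read + " bytes at pos " + in.getPos());
    } finally {
      in.close();
    }
  }
}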


@ -1,388 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.snative;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.swift.exceptions.SwiftException;
import org.apache.hadoop.fs.swift.exceptions.SwiftInternalStateException;
import org.apache.hadoop.fs.swift.util.SwiftUtils;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
/**
* Output stream, buffers data on local disk.
* Writes to Swift on the close() call, unless the
* file is large enough that it is being written as partitions.
* In this case, the first partition is uploaded on the first write that puts
* data over the partition size, as may later writes. The close() then causes
* the final partition to be written, along with a partition manifest.
*/
class SwiftNativeOutputStream extends OutputStream {
public static final int ATTEMPT_LIMIT = 3;
private long filePartSize;
private static final Log LOG =
LogFactory.getLog(SwiftNativeOutputStream.class);
private Configuration conf;
private String key;
private File backupFile;
private OutputStream backupStream;
private SwiftNativeFileSystemStore nativeStore;
private boolean closed;
private int partNumber;
private long blockOffset;
private long bytesWritten;
private long bytesUploaded;
private boolean partUpload = false;
final byte[] oneByte = new byte[1];
/**
* Create an output stream
* @param conf configuration to use
* @param nativeStore native store to write through
* @param key the key to write
* @param partSizeKB the partition size in kilobytes
* @throws IOException
*/
@SuppressWarnings("IOResourceOpenedButNotSafelyClosed")
public SwiftNativeOutputStream(Configuration conf,
SwiftNativeFileSystemStore nativeStore,
String key,
long partSizeKB) throws IOException {
this.conf = conf;
this.key = key;
this.backupFile = newBackupFile();
this.nativeStore = nativeStore;
this.backupStream = new BufferedOutputStream(new FileOutputStream(backupFile));
this.partNumber = 1;
this.blockOffset = 0;
this.filePartSize = 1024L * partSizeKB;
}
private File newBackupFile() throws IOException {
File dir = new File(conf.get("hadoop.tmp.dir"));
if (!dir.mkdirs() && !dir.exists()) {
throw new SwiftException("Cannot create Swift buffer directory: " + dir);
}
File result = File.createTempFile("output-", ".tmp", dir);
result.deleteOnExit();
return result;
}
/**
* Flush the local backing stream.
* This does not trigger a flush of data to the remote blobstore.
* @throws IOException
*/
@Override
public void flush() throws IOException {
backupStream.flush();
}
/**
* check that the output stream is open
*
* @throws SwiftException if it is not
*/
private synchronized void verifyOpen() throws SwiftException {
if (closed) {
throw new SwiftException("Output stream is closed");
}
}
/**
* Close the stream. This will trigger the upload of all locally cached
* data to the remote blobstore.
* @throws IOException IO problems uploading the data.
*/
@Override
public synchronized void close() throws IOException {
if (closed) {
return;
}
try {
closed = true;
//formally declare as closed.
backupStream.close();
backupStream = null;
Path keypath = new Path(key);
if (partUpload) {
partUpload(true);
nativeStore.createManifestForPartUpload(keypath);
} else {
uploadOnClose(keypath);
}
} finally {
delete(backupFile);
backupFile = null;
}
assert backupStream == null: "backup stream has been reopened";
}
/**
* Upload a file when closed, either in one go, or, if the file is
* already partitioned, by uploading the remaining partition and a manifest.
* @param keypath key as a path
* @throws IOException IO Problems
*/
private void uploadOnClose(Path keypath) throws IOException {
boolean uploadSuccess = false;
int attempt = 0;
while (!uploadSuccess) {
try {
++attempt;
bytesUploaded += uploadFileAttempt(keypath, attempt);
uploadSuccess = true;
} catch (IOException e) {
LOG.info("Upload failed " + e, e);
if (attempt > ATTEMPT_LIMIT) {
throw e;
}
}
}
}
@SuppressWarnings("IOResourceOpenedButNotSafelyClosed")
private long uploadFileAttempt(Path keypath, int attempt) throws IOException {
long uploadLen = backupFile.length();
SwiftUtils.debug(LOG, "Closing write of file %s;" +
" localfile=%s of length %d - attempt %d",
key,
backupFile,
uploadLen,
attempt);
nativeStore.uploadFile(keypath,
new FileInputStream(backupFile),
uploadLen);
return uploadLen;
}
@Override
protected void finalize() throws Throwable {
if (!closed) {
LOG.warn("stream not closed");
}
if (backupFile != null) {
LOG.warn("Leaking backing file " + backupFile);
}
}
private void delete(File file) {
if (file != null) {
SwiftUtils.debug(LOG, "deleting %s", file);
if (!file.delete()) {
LOG.warn("Could not delete " + file);
}
}
}
@Override
public void write(int b) throws IOException {
//insert to a one byte array
oneByte[0] = (byte) b;
//then delegate to the array writing routine
write(oneByte, 0, 1);
}
@Override
public synchronized void write(byte[] buffer, int offset, int len) throws
IOException {
//validate args
if (offset < 0 || len < 0 || (offset + len) > buffer.length) {
throw new IndexOutOfBoundsException("Invalid offset/length for write");
}
//validate the output stream
verifyOpen();
SwiftUtils.debug(LOG, " write(offset=%d, len=%d)", offset, len);
// if the size of the file is greater than the partition limit
while (blockOffset + len >= filePartSize) {
// - then partition the blob and upload as many partitions
// as are needed.
//how many bytes to write for this partition.
int subWriteLen = (int) (filePartSize - blockOffset);
if (subWriteLen < 0 || subWriteLen > len) {
throw new SwiftInternalStateException("Invalid subwrite len: "
+ subWriteLen
+ " -buffer len: " + len);
}
writeToBackupStream(buffer, offset, subWriteLen);
//move the offset along and length down
offset += subWriteLen;
len -= subWriteLen;
//now upload the partition that has just been filled up
// (this also sets blockOffset=0)
partUpload(false);
}
//any remaining data is now written
writeToBackupStream(buffer, offset, len);
}
/**
* Write to the backup stream.
* Guarantees:
* <ol>
* <li>backupStream is open</li>
* <li>blockOffset + len &lt; filePartSize</li>
* </ol>
* @param buffer buffer to write
* @param offset offset in buffer
* @param len length of write.
* @throws IOException backup stream write failing
*/
private void writeToBackupStream(byte[] buffer, int offset, int len) throws
IOException {
assert len >= 0 : "remainder to write is negative";
SwiftUtils.debug(LOG," writeToBackupStream(offset=%d, len=%d)", offset, len);
if (len == 0) {
//no remainder -downgrade to noop
return;
}
//write the new data out to the backup stream
backupStream.write(buffer, offset, len);
//increment the counters
blockOffset += len;
bytesWritten += len;
}
/**
* Upload a single partition. This deletes the local backing-file,
* and re-opens it to create a new one.
* @param closingUpload is this the final upload of the file?
* @throws IOException on IO problems
*/
@SuppressWarnings("IOResourceOpenedButNotSafelyClosed")
private void partUpload(boolean closingUpload) throws IOException {
if (backupStream != null) {
backupStream.close();
}
if (closingUpload && partUpload && backupFile.length() == 0) {
//skipping the upload if
// - it is close time
// - the final partition is 0 bytes long
// - one part has already been written
SwiftUtils.debug(LOG, "skipping upload of 0 byte final partition");
delete(backupFile);
} else {
partUpload = true;
boolean uploadSuccess = false;
int attempt = 0;
while (!uploadSuccess) {
try {
++attempt;
bytesUploaded += uploadFilePartAttempt(attempt);
uploadSuccess = true;
} catch (IOException e) {
LOG.info("Upload failed " + e, e);
if (attempt > ATTEMPT_LIMIT) {
throw e;
}
}
}
delete(backupFile);
partNumber++;
blockOffset = 0;
if (!closingUpload) {
//if not the final upload, create a new output stream
backupFile = newBackupFile();
backupStream =
new BufferedOutputStream(new FileOutputStream(backupFile));
}
}
}
@SuppressWarnings("IOResourceOpenedButNotSafelyClosed")
private long uploadFilePartAttempt(int attempt) throws IOException {
long uploadLen = backupFile.length();
SwiftUtils.debug(LOG, "Uploading part %d of file %s;" +
" localfile=%s of length %d - attempt %d",
partNumber,
key,
backupFile,
uploadLen,
attempt);
nativeStore.uploadFilePart(new Path(key),
partNumber,
new FileInputStream(backupFile),
uploadLen);
return uploadLen;
}
/**
* Get the file partition size
* @return the partition size
*/
long getFilePartSize() {
return filePartSize;
}
/**
* Query the number of partitions written.
* This is intended for testing.
* @return the number of partitions already written to the remote FS
*/
synchronized int getPartitionsWritten() {
return partNumber - 1;
}
/**
* Get the number of bytes written to the output stream.
* Once the stream is closed, this should be less than or equal to
* bytesUploaded (retried uploads can push bytesUploaded higher).
* @return the number of bytes written to this stream
*/
long getBytesWritten() {
return bytesWritten;
}
/**
* Get the number of bytes uploaded to remote Swift cluster.
* bytesWritten - bytesUploaded = the number of bytes left to upload
* @return the number of bytes written to the remote endpoint
*/
long getBytesUploaded() {
return bytesUploaded;
}
@Override
public String toString() {
return "SwiftNativeOutputStream{" +
", key='" + key + '\'' +
", backupFile=" + backupFile +
", closed=" + closed +
", filePartSize=" + filePartSize +
", partNumber=" + partNumber +
", blockOffset=" + blockOffset +
", partUpload=" + partUpload +
", nativeStore=" + nativeStore +
", bytesWritten=" + bytesWritten +
", bytesUploaded=" + bytesUploaded +
'}';
}
}
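A hedged sketch of exercising the partitioned-upload path above through the
Hadoop FileSystem API. The configuration key "fs.swift.partsize" (partition
size in KB) and the service/container names are assumptions for illustration,
not values confirmed by this file.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PartitionedUploadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    //assumed key: partition size in KB; here 1 MB partitions
    conf.setInt("fs.swift.partsize", 1024);
    Path path = new Path("swift://container.myservice/big/file.bin");
    FileSystem fs = FileSystem.get(path.toUri(), conf);
    byte[] block = new byte[64 * 1024];
    FSDataOutputStream out = fs.create(path, true);
    try {
      //~4 MB of writes: each time blockOffset crosses filePartSize a
      //partition is uploaded; close() uploads the last one plus a manifest
      for (int i = 0; i < 64; i++) {
        out.write(block);
      }
    } finally {
      out.close();
    }
  }
}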


@ -1,115 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.snative;
import java.util.Date;
/**
* Java mapping of Swift JSON file status.
* THIS FILE IS MAPPED BY JACKSON TO AND FROM JSON.
* DO NOT RENAME OR MODIFY FIELDS AND THEIR ACCESSORS.
*/
public class SwiftObjectFileStatus {
private long bytes;
private String content_type;
private String hash;
private Date last_modified;
private String name;
private String subdir;
SwiftObjectFileStatus() {
}
SwiftObjectFileStatus(long bytes, String content_type, String hash,
Date last_modified, String name) {
this.bytes = bytes;
this.content_type = content_type;
this.hash = hash;
this.last_modified = last_modified;
this.name = name;
}
public long getBytes() {
return bytes;
}
public void setBytes(long bytes) {
this.bytes = bytes;
}
public String getContent_type() {
return content_type;
}
public void setContent_type(String content_type) {
this.content_type = content_type;
}
public String getHash() {
return hash;
}
public void setHash(String hash) {
this.hash = hash;
}
public Date getLast_modified() {
return last_modified;
}
public void setLast_modified(Date last_modified) {
this.last_modified = last_modified;
}
public String getName() {
return pathToRootPath(name);
}
public void setName(String name) {
this.name = name;
}
public String getSubdir() {
return pathToRootPath(subdir);
}
public void setSubdir(String subdir) {
this.subdir = subdir;
}
/**
* If the path doesn't start with '/',
* the method will prepend one.
*
* @param path specified path
* @return root path string
*/
private String pathToRootPath(String path) {
if (path == null) {
return null;
}
if (path.startsWith("/")) {
return path;
}
return "/".concat(path);
}
}


@ -1,57 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.util;
public class Duration {
private final long started;
private long finished;
public Duration() {
started = time();
finished = started;
}
private long time() {
return System.currentTimeMillis();
}
public void finished() {
finished = time();
}
public String getDurationString() {
return humanTime(value());
}
public static String humanTime(long time) {
long seconds = (time / 1000);
long minutes = (seconds / 60);
return String.format("%d:%02d:%03d", minutes, seconds % 60, time % 1000);
}
@Override
public String toString() {
return getDurationString();
}
public long value() {
return finished - started;
}
}


@ -1,154 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.util;
/**
* Build ongoing statistics from duration data
*/
public class DurationStats {
final String operation;
int n;
long sum;
long min;
long max;
double mean, m2;
/**
* Construct statistics for a given operation.
* @param operation operation
*/
public DurationStats(String operation) {
this.operation = operation;
reset();
}
/**
* Construct from another stats entry;
* all values are copied.
* @param that the source statistics
*/
public DurationStats(DurationStats that) {
operation = that.operation;
n = that.n;
sum = that.sum;
min = that.min;
max = that.max;
mean = that.mean;
m2 = that.m2;
}
/**
* Add a duration
* @param duration the new duration
*/
public void add(Duration duration) {
add(duration.value());
}
/**
* Add a number
* @param x the number
*/
public void add(long x) {
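//incremental (Welford-style) update of the running mean and of m2,
//the sum of squared deviations from the mean; getVariance() later
//derives the sample variance as m2 / (n - 1)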
n++;
sum += x;
double delta = x - mean;
mean += delta / n;
m2 += delta * (x - mean);
if (x < min) {
min = x;
}
if (x > max) {
max = x;
}
}
/**
* Reset the data
*/
public void reset() {
n = 0;
sum = 0;
min = 10000000;
max = 0;
mean = 0;
m2 = 0;
}
/**
* Get the number of entries sampled
* @return the number of durations added
*/
public int getCount() {
return n;
}
/**
* Get the sum of all durations
* @return all the durations
*/
public long getSum() {
return sum;
}
/**
* Get the arithmetic mean of the aggregate statistics
* @return the arithmetic mean
*/
public double getArithmeticMean() {
return mean;
}
/**
* Variance, sigma^2
* @return the variance; 0 if there are fewer than two samples.
*/
public double getVariance() {
return n > 1 ? (m2 / (n - 1)) : 0;
}
/**
* Get the std deviation, sigma
* @return the stddev, 0 may mean there are no samples.
*/
public double getDeviation() {
double variance = getVariance();
return (variance > 0) ? Math.sqrt(variance) : 0;
}
/**
* Convert to a useful string
* @return a human readable summary
*/
@Override
public String toString() {
return String.format(
"%s count=%d total=%.3fs mean=%.3fs stddev=%.3fs min=%.3fs max=%.3fs",
operation,
n,
sum / 1000.0,
mean / 1000.0,
getDeviation() / 1000.0,
min / 1000.0,
max / 1000.0);
}
}


@ -1,77 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.util;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* Build a duration stats table to which you can add statistics.
* Designed to be multithreaded
*/
public class DurationStatsTable {
private Map<String,DurationStats> statsTable
= new HashMap<String, DurationStats>(6);
/**
* Add an operation's duration to the table.
* @param operation operation name
* @param duration duration
* @param success did the operation succeed? Failures are recorded
* under the separate key "operation-FAIL"
*/
public void add(String operation, Duration duration, boolean success) {
DurationStats durationStats;
String key = operation;
if (!success) {
key += "-FAIL";
}
synchronized (this) {
durationStats = statsTable.get(key);
if (durationStats == null) {
durationStats = new DurationStats(key);
statsTable.put(key, durationStats);
}
}
synchronized (durationStats) {
durationStats.add(duration);
}
}
/**
* Get the current duration statistics
* @return a snapshot of the statistics
*/
public synchronized List<DurationStats> getDurationStatistics() {
List<DurationStats> results = new ArrayList<DurationStats>(statsTable.size());
for (DurationStats stat: statsTable.values()) {
results.add(new DurationStats(stat));
}
return results;
}
/**
* Reset the values of the statistics. This doesn't delete them, merely zeroes them.
*/
public synchronized void reset() {
for (DurationStats stat : statsTable.values()) {
stat.reset();
}
}
}
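A small sketch of how the three classes above compose: a Duration brackets
each operation, and a DurationStatsTable aggregates per-operation statistics,
with failures tracked under a separate "-FAIL" key. The operation name and
the sleep are stand-ins for real Swift REST calls.

import java.util.List;
import org.apache.hadoop.fs.swift.util.Duration;
import org.apache.hadoop.fs.swift.util.DurationStats;
import org.apache.hadoop.fs.swift.util.DurationStatsTable;

public class TimingSketch {
  public static void main(String[] args) throws Exception {
    DurationStatsTable table = new DurationStatsTable();
    for (int i = 0; i < 5; i++) {
      Duration d = new Duration();   //construction starts the clock
      Thread.sleep(20);              //stand-in for a Swift REST call
      d.finished();                  //stops the clock
      table.add("GET", d, true);     //false would record under "GET-FAIL"
    }
    //getDurationStatistics() hands back copies, so this is a safe snapshot
    for (DurationStats stats : table.getDurationStatistics()) {
      System.out.println(stats);     //e.g. GET count=5 total=0.103s ...
    }
  }
}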


@ -1,130 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.util;
import org.apache.hadoop.fs.swift.exceptions.SwiftJsonMarshallingException;
import org.codehaus.jackson.JsonGenerationException;
import org.codehaus.jackson.map.JsonMappingException;
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.map.type.CollectionType;
import org.codehaus.jackson.type.TypeReference;
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;
public class JSONUtil {
private static ObjectMapper jsonMapper = new ObjectMapper();
/**
* Private constructor.
*/
private JSONUtil() {
}
/**
* Convert an object to a JSON string. Marshalling failures are
* rethrown as SwiftJsonMarshallingException.
*
* @param object The object to convert.
* @return The JSON string representation.
* @throws IOException IO issues
* @throws SwiftJsonMarshallingException failure to generate JSON
*/
public static String toJSON(Object object) throws
IOException {
Writer json = new StringWriter();
try {
jsonMapper.writeValue(json, object);
return json.toString();
} catch (JsonGenerationException e) {
throw new SwiftJsonMarshallingException(e.toString(), e);
} catch (JsonMappingException e) {
throw new SwiftJsonMarshallingException(e.toString(), e);
}
}
/**
* Convert a string representation to an object. Marshalling failures
* are rethrown as SwiftJsonMarshallingException.
*
* @param value The JSON string.
* @param klazz The class to convert.
* @return The Object of the given class.
*/
public static <T> T toObject(String value, Class<T> klazz) throws
IOException {
try {
return jsonMapper.readValue(value, klazz);
} catch (JsonGenerationException e) {
throw new SwiftJsonMarshallingException(e.toString()
+ " source: " + value,
e);
} catch (JsonMappingException e) {
throw new SwiftJsonMarshallingException(e.toString()
+ " source: " + value,
e);
}
}
/**
* @param value json string
* @param typeReference class type reference
* @param <T> type
* @return deserialized T object
*/
public static <T> T toObject(String value,
final TypeReference<T> typeReference)
throws IOException {
try {
return jsonMapper.readValue(value, typeReference);
} catch (JsonGenerationException e) {
throw new SwiftJsonMarshallingException("Error generating response", e);
} catch (JsonMappingException e) {
throw new SwiftJsonMarshallingException("Error generating response", e);
}
}
/**
* @param value json string
* @param collectionType class describing how to deserialize collection of objects
* @param <T> type
* @return deserialized T object
*/
public static <T> T toObject(String value,
final CollectionType collectionType)
throws IOException {
try {
return jsonMapper.readValue(value, collectionType);
} catch (JsonGenerationException e) {
throw new SwiftJsonMarshallingException(e.toString()
+ " source: " + value,
e);
} catch (JsonMappingException e) {
throw new SwiftJsonMarshallingException(e.toString()
+ " source: " + value,
e);
}
}
public static ObjectMapper getJsonMapper() {
return jsonMapper;
}
}
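A minimal sketch of deserializing a Swift container-listing fragment into the
SwiftObjectFileStatus beans defined earlier, using the TypeReference overload
above. The JSON literal is a hand-written stand-in for a real Swift response.

import java.util.List;
import org.apache.hadoop.fs.swift.snative.SwiftObjectFileStatus;
import org.apache.hadoop.fs.swift.util.JSONUtil;
import org.codehaus.jackson.type.TypeReference;

public class ListingParseSketch {
  public static void main(String[] args) throws Exception {
    //a fabricated single-entry container listing
    String json = "[{\"name\":\"dir/file1\",\"bytes\":1024,"
        + "\"hash\":\"d41d8cd9\",\"content_type\":\"text/plain\"}]";
    List<SwiftObjectFileStatus> listing = JSONUtil.toObject(json,
        new TypeReference<List<SwiftObjectFileStatus>>() {});
    //getName() prepends a '/' when the raw name lacks one
    System.out.println(listing.get(0).getName());   // -> /dir/file1
  }
}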


@ -1,183 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.util;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException;
import org.apache.hadoop.fs.swift.http.RestClientBindings;
import java.net.URI;
import java.util.regex.Pattern;
/**
* Swift hierarchy mapping of (container, path)
*/
public final class SwiftObjectPath {
private static final Pattern PATH_PART_PATTERN = Pattern.compile(".*/AUTH_\\w*/");
/**
* Swift container
*/
private final String container;
/**
* swift object
*/
private final String object;
private final String uriPath;
/**
* Build an instance from a (container, object) pair
*
* @param container container name
* @param object object ref underneath the container
*/
public SwiftObjectPath(String container, String object) {
this.container = container;
this.object = object;
uriPath = buildUriPath();
}
public String getContainer() {
return container;
}
public String getObject() {
return object;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof SwiftObjectPath)) return false;
final SwiftObjectPath that = (SwiftObjectPath) o;
return this.toUriPath().equals(that.toUriPath());
}
@Override
public int hashCode() {
int result = container.hashCode();
result = 31 * result + object.hashCode();
return result;
}
private String buildUriPath() {
return SwiftUtils.joinPaths(container, object);
}
public String toUriPath() {
return uriPath;
}
@Override
public String toString() {
return toUriPath();
}
/**
* Test for the object matching a path, ignoring the container
* value.
*
* @param path path string
* @return true iff the object's name matches the path
*/
public boolean objectMatches(String path) {
return object.equals(path);
}
/**
* Query to see if the possibleChild object is a child path of this
* object.
*
* The test is done by probing for the path of this object being
* at the start of the other -with a trailing slash- and both
* containers being equal.
*
* @param possibleChild possible child dir
* @return true iff the possibleChild is under this object
*/
public boolean isEqualToOrParentOf(SwiftObjectPath possibleChild) {
String origPath = toUriPath();
String path = origPath;
if (!path.endsWith("/")) {
path = path + "/";
}
String childPath = possibleChild.toUriPath();
return childPath.equals(origPath) || childPath.startsWith(path);
}
/**
* Create a path tuple of (container, path), where the container is
* chosen from the host of the URI.
*
* @param uri uri to start from
* @param path path underneath
* @return a new instance.
* @throws SwiftConfigurationException if the URI host doesn't parse into
* container.service
*/
public static SwiftObjectPath fromPath(URI uri,
Path path)
throws SwiftConfigurationException {
return fromPath(uri, path, false);
}
/**
* Create a path tuple of (container, path), where the container is
* chosen from the host of the URI.
* A trailing slash can be added to the path. This is the point where
* these /-es need to be appended, because when you construct a {@link Path}
* instance, {@link Path#normalizePath(String, String)} is called
* -which strips off any trailing slash.
*
* @param uri uri to start from
* @param path path underneath
* @param addTrailingSlash should a trailing slash be added if there isn't one.
* @return a new instance.
* @throws SwiftConfigurationException if the URI host doesn't parse into
* container.service
*/
public static SwiftObjectPath fromPath(URI uri,
Path path,
boolean addTrailingSlash)
throws SwiftConfigurationException {
String url =
path.toUri().getPath().replaceAll(PATH_PART_PATTERN.pattern(), "");
//add a trailing slash if needed
if (addTrailingSlash && !url.endsWith("/")) {
url += "/";
}
String container = uri.getHost();
if (container == null) {
//no container, not good: replace with ""
container = "";
} else if (container.contains(".")) {
//it's a container.service URI: extract the container part
container = RestClientBindings.extractContainerName(container);
}
return new SwiftObjectPath(container, url);
}
}
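A short sketch of fromPath() in action, assuming a hypothetical filesystem
URI swift://data.myservice/ -that is, container "data" on service "myservice".

import java.net.URI;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.swift.util.SwiftObjectPath;

public class ObjectPathSketch {
  public static void main(String[] args) throws Exception {
    URI fsUri = new URI("swift://data.myservice/");
    //container.service hostname: the container part is extracted
    SwiftObjectPath object =
        SwiftObjectPath.fromPath(fsUri, new Path("/work/out/results"));
    System.out.println(object.getContainer());   // -> data
    System.out.println(object.getObject());      // -> /work/out/results
    //addTrailingSlash=true makes the parent/child probe reliable
    SwiftObjectPath dir =
        SwiftObjectPath.fromPath(fsUri, new Path("/work"), true);
    System.out.println(dir.isEqualToOrParentOf(object));   // -> true
  }
}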


@ -1,544 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.util;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException;
import org.junit.internal.AssumptionViolatedException;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.Properties;
/**
* Utilities used across test cases
*/
public class SwiftTestUtils extends org.junit.Assert {
private static final Log LOG =
LogFactory.getLog(SwiftTestUtils.class);
public static final String TEST_FS_SWIFT = "test.fs.swift.name";
public static final String IO_FILE_BUFFER_SIZE = "io.file.buffer.size";
/**
* Get the test URI
* @param conf configuration
* @throws SwiftConfigurationException missing parameter or bad URI
*/
public static URI getServiceURI(Configuration conf) throws
SwiftConfigurationException {
String instance = conf.get(TEST_FS_SWIFT);
if (instance == null) {
throw new SwiftConfigurationException(
"Missing configuration entry " + TEST_FS_SWIFT);
}
try {
return new URI(instance);
} catch (URISyntaxException e) {
throw new SwiftConfigurationException("Bad URI: " + instance);
}
}
public static boolean hasServiceURI(Configuration conf) {
String instance = conf.get(TEST_FS_SWIFT);
return instance != null;
}
/**
* Assert that a property in the property set matches the expected value
* @param props property set
* @param key property name
* @param expected expected value. If null, the property must not be in the set
*/
public static void assertPropertyEquals(Properties props,
String key,
String expected) {
String val = props.getProperty(key);
if (expected == null) {
assertNull("Non null property " + key + " = " + val, val);
} else {
assertEquals("property " + key + " = " + val,
expected,
val);
}
}
/**
*
* Write a file and read it in, validating the result. Optional flags control
* whether file overwrite operations should be enabled, and whether the
* file should be deleted afterwards.
*
* If there is a mismatch between what was written and what was expected,
* a small range of bytes either side of the first error are logged to aid
* diagnosing what problem occurred -whether it was a previous file
* or a corruption of the current file. This assumes that two
* sequential runs to the same path use datasets with different character
* moduli.
*
* @param fs filesystem
* @param path path to write to
* @param src the source dataset
* @param len length of data
* @param blocksize block size to use when creating the file
* @param overwrite should the create option allow overwrites?
* @param delete should the file be deleted afterwards? -with a verification
* that it worked. Deletion is not attempted if an assertion has failed
* earlier -it is not in a <code>finally{}</code> block.
* @throws IOException IO problems
*/
public static void writeAndRead(FileSystem fs,
Path path,
byte[] src,
int len,
int blocksize,
boolean overwrite,
boolean delete) throws IOException {
fs.mkdirs(path.getParent());
writeDataset(fs, path, src, len, blocksize, overwrite);
byte[] dest = readDataset(fs, path, len);
compareByteArrays(src, dest, len);
if (delete) {
boolean deleted = fs.delete(path, false);
assertTrue("Deleted", deleted);
assertPathDoesNotExist(fs, "Cleanup failed", path);
}
}
/**
* Write a file.
* Optional flags control
* whether file overwrite operations should be enabled
* @param fs filesystem
* @param path path to write to
* @param src the source dataset
* @param len length of data
* @param blocksize block size to use when creating the file
* @param overwrite should the create option allow overwrites?
* @throws IOException IO problems
*/
public static void writeDataset(FileSystem fs,
Path path,
byte[] src,
int len,
int blocksize,
boolean overwrite) throws IOException {
assertTrue(
"Not enough data in source array to write " + len + " bytes",
src.length >= len);
FSDataOutputStream out = fs.create(path,
overwrite,
fs.getConf()
.getInt(IO_FILE_BUFFER_SIZE,
4096),
(short) 1,
blocksize);
out.write(src, 0, len);
out.close();
assertFileHasLength(fs, path, len);
}
/**
* Read the file and convert to a byte dataset
* @param fs filesystem
* @param path path to read from
* @param len length of data to read
* @return the bytes
* @throws IOException IO problems
*/
public static byte[] readDataset(FileSystem fs, Path path, int len)
throws IOException {
FSDataInputStream in = fs.open(path);
byte[] dest = new byte[len];
try {
in.readFully(0, dest);
} finally {
in.close();
}
return dest;
}
/**
* Assert that the arrays src[0..len] and dest[] are equal
* @param src source data
* @param dest actual
* @param len length of bytes to compare
*/
public static void compareByteArrays(byte[] src,
byte[] dest,
int len) {
assertEquals("Number of bytes read != number written",
len, dest.length);
int errors = 0;
int first_error_byte = -1;
for (int i = 0; i < len; i++) {
if (src[i] != dest[i]) {
if (errors == 0) {
first_error_byte = i;
}
errors++;
}
}
if (errors > 0) {
String message = String.format(" %d errors in file of length %d",
errors, len);
LOG.warn(message);
// the range either side of the first error to print
// this is a purely arbitrary number, to aid user debugging
final int overlap = 10;
for (int i = Math.max(0, first_error_byte - overlap);
i < Math.min(first_error_byte + overlap, len);
i++) {
byte actual = dest[i];
byte expected = src[i];
String letter = toChar(actual);
String line = String.format("[%04d] %2x %s\n", i, actual, letter);
if (expected != actual) {
line = String.format("[%04d] %2x %s -expected %2x %s\n",
i,
actual,
letter,
expected,
toChar(expected));
}
LOG.warn(line);
}
fail(message);
}
}
/**
* Convert a byte to a character for printing. If the
* byte value is < 32 -and hence unprintable- the byte is
* returned as a two digit hex value
* @param b byte
* @return the printable character string
*/
public static String toChar(byte b) {
if (b >= 0x20) {
return Character.toString((char) b);
} else {
return String.format("%02x", b);
}
}
public static String toChar(byte[] buffer) {
StringBuilder builder = new StringBuilder(buffer.length);
for (byte b : buffer) {
builder.append(toChar(b));
}
return builder.toString();
}
public static byte[] toAsciiByteArray(String s) {
char[] chars = s.toCharArray();
int len = chars.length;
byte[] buffer = new byte[len];
for (int i = 0; i < len; i++) {
buffer[i] = (byte) (chars[i] & 0xff);
}
return buffer;
}
public static void cleanupInTeardown(FileSystem fileSystem,
String cleanupPath) {
cleanup("TEARDOWN", fileSystem, cleanupPath);
}
public static void cleanup(String action,
FileSystem fileSystem,
String cleanupPath) {
noteAction(action);
try {
if (fileSystem != null) {
fileSystem.delete(new Path(cleanupPath).makeQualified(fileSystem),
true);
}
} catch (Exception e) {
LOG.error("Error deleting in "+ action + " - " + cleanupPath + ": " + e, e);
}
}
public static void noteAction(String action) {
if (LOG.isDebugEnabled()) {
LOG.debug("============== "+ action +" =============");
}
}
/**
* Downgrade a failure to a message and a warning, then an
* exception for the JUnit test runner to mark the test as skipped
* @param message text message
* @param failure what failed
* @throws AssumptionViolatedException always
*/
public static void downgrade(String message, Throwable failure) {
LOG.warn("Downgrading test " + message, failure);
AssumptionViolatedException ave =
new AssumptionViolatedException(failure, null);
throw ave;
}
/**
* report an overridden test as unsupported
* @param message message to use in the text
* @throws AssumptionViolatedException always
*/
public static void unsupported(String message) {
throw new AssumptionViolatedException(message);
}
/**
* report a test has been skipped for some reason
* @param message message to use in the text
* @throws AssumptionViolatedException always
*/
public static void skip(String message) {
throw new AssumptionViolatedException(message);
}
/**
* Make an assertion about the length of a file
* @param fs filesystem
* @param path path of the file
* @param expected expected length
* @throws IOException on File IO problems
*/
public static void assertFileHasLength(FileSystem fs, Path path,
int expected) throws IOException {
FileStatus status = fs.getFileStatus(path);
assertEquals(
"Wrong file length of file " + path + " status: " + status,
expected,
status.getLen());
}
/**
* Assert that a path refers to a directory
* @param fs filesystem
* @param path path of the directory
* @throws IOException on File IO problems
*/
public static void assertIsDirectory(FileSystem fs,
Path path) throws IOException {
FileStatus fileStatus = fs.getFileStatus(path);
assertIsDirectory(fileStatus);
}
/**
* Assert that a path refers to a directory
* @param fileStatus stats to check
*/
public static void assertIsDirectory(FileStatus fileStatus) {
assertTrue("Should be a dir -but isn't: " + fileStatus,
fileStatus.isDir());
}
/**
* Write the text to a file, returning the converted byte array
* for use in validating the round trip
* @param fs filesystem
* @param path path of file
* @param text text to write
* @param overwrite should the operation overwrite any existing file?
* @return the read bytes
* @throws IOException on IO problems
*/
public static byte[] writeTextFile(FileSystem fs,
Path path,
String text,
boolean overwrite) throws IOException {
FSDataOutputStream stream = fs.create(path, overwrite);
byte[] bytes = new byte[0];
if (text != null) {
bytes = toAsciiByteArray(text);
stream.write(bytes);
}
stream.close();
return bytes;
}
/**
* Touch a file: fails if it is already there
* @param fs filesystem
* @param path path
* @throws IOException IO problems
*/
public static void touch(FileSystem fs,
Path path) throws IOException {
fs.delete(path, true);
writeTextFile(fs, path, null, false);
}
public static void assertDeleted(FileSystem fs,
Path file,
boolean recursive) throws IOException {
assertPathExists(fs, "about to be deleted file", file);
boolean deleted = fs.delete(file, recursive);
String dir = ls(fs, file.getParent());
assertTrue("Delete failed on " + file + ": " + dir, deleted);
assertPathDoesNotExist(fs, "Deleted file", file);
}
/**
* Read in "length" bytes, convert to an ascii string
* @param fs filesystem
* @param path path to read
* @param length #of bytes to read.
* @return the bytes read and converted to a string
* @throws IOException
*/
public static String readBytesToString(FileSystem fs,
Path path,
int length) throws IOException {
FSDataInputStream in = fs.open(path);
try {
byte[] buf = new byte[length];
in.readFully(0, buf);
return toChar(buf);
} finally {
in.close();
}
}
public static String getDefaultWorkingDirectory() {
return "/user/" + System.getProperty("user.name");
}
public static String ls(FileSystem fileSystem, Path path) throws IOException {
return SwiftUtils.ls(fileSystem, path);
}
public static String dumpStats(String pathname, FileStatus[] stats) {
return pathname + SwiftUtils.fileStatsToString(stats,"\n");
}
/**
* Assert that a file exists and that its {@link FileStatus} entry
* declares that this is a file and not a symlink or directory.
* @param fileSystem filesystem to resolve path against
* @param filename name of the file
* @throws IOException IO problems during file operations
*/
public static void assertIsFile(FileSystem fileSystem, Path filename) throws
IOException {
assertPathExists(fileSystem, "Expected file", filename);
FileStatus status = fileSystem.getFileStatus(filename);
String fileInfo = filename + " " + status;
assertFalse("File claims to be a directory " + fileInfo,
status.isDir());
/* disabled for Hadoop v1 compatibility
assertFalse("File claims to be a symlink " + fileInfo,
status.isSymlink());
*/
}
/**
* Create a dataset for use in the tests; all data is in the range
* base to (base+modulo-1) inclusive
* @param len length of data
* @param base base of the data
* @param modulo the modulo
* @return the newly generated dataset
*/
public static byte[] dataset(int len, int base, int modulo) {
byte[] dataset = new byte[len];
for (int i = 0; i < len; i++) {
dataset[i] = (byte) (base + (i % modulo));
}
return dataset;
}
/**
* Assert that a path exists -but make no assertions as to the
* type of that entry
*
* @param fileSystem filesystem to examine
* @param message message to include in the assertion failure message
* @param path path in the filesystem
* @throws IOException IO problems
*/
public static void assertPathExists(FileSystem fileSystem, String message,
Path path) throws IOException {
if (!fileSystem.exists(path)) {
//failure: build the parent listing first, as fail() never returns
String listing = ls(fileSystem, path.getParent());
fail(message + ": not found " + path + " in " + path.getParent()
+ "\n" + listing);
}
}
/**
* Assert that a path does not exist
*
* @param fileSystem filesystem to examine
* @param message message to include in the assertion failure message
* @param path path in the filesystem
* @throws IOException IO problems
*/
public static void assertPathDoesNotExist(FileSystem fileSystem,
String message,
Path path) throws IOException {
try {
FileStatus status = fileSystem.getFileStatus(path);
fail(message + ": unexpectedly found " + path + " as " + status);
} catch (FileNotFoundException expected) {
//this is expected
}
}
/**
* Assert that a FileSystem.listStatus on a dir finds the subdir/child entry
* @param fs filesystem
* @param dir directory to scan
* @param subdir full path to look for
* @throws IOException IO problems
*/
public static void assertListStatusFinds(FileSystem fs,
Path dir,
Path subdir) throws IOException {
FileStatus[] stats = fs.listStatus(dir);
boolean found = false;
StringBuilder builder = new StringBuilder();
for (FileStatus stat : stats) {
builder.append(stat.toString()).append('\n');
if (stat.getPath().equals(subdir)) {
found = true;
}
}
assertTrue("Path " + subdir
+ " not found in directory " + dir + ":" + builder,
found);
}
}
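A hedged sketch of a round-trip test assembled from the helpers above. It
assumes a test.fs.swift.name binding in the configuration, and skips rather
than fails when none is present.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.swift.util.SwiftTestUtils;
import org.junit.Test;

public class TestSwiftRoundTrip {
  @Test
  public void testWriteReadDelete() throws Exception {
    Configuration conf = new Configuration();
    if (!SwiftTestUtils.hasServiceURI(conf)) {
      //throws AssumptionViolatedException: the runner marks this as skipped
      SwiftTestUtils.skip("no " + SwiftTestUtils.TEST_FS_SWIFT + " set");
    }
    FileSystem fs = FileSystem.get(SwiftTestUtils.getServiceURI(conf), conf);
    Path path = new Path("/test/roundtrip.dat");
    //bytes cycle over a modulus, so stale data from an earlier run with a
    //different base/modulo shows up clearly in the comparison
    byte[] data = SwiftTestUtils.dataset(16 * 1024, 32, 64);
    //write, read back, compare, then delete with verification
    SwiftTestUtils.writeAndRead(fs, path, data, data.length, 4096, true, true);
  }
}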


@ -1,193 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift.util;
import org.apache.commons.logging.Log;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.FileNotFoundException;
import java.io.IOException;
/**
* Various utility classes for SwiftFS support
*/
public final class SwiftUtils {
public static final String READ = "read(buffer, offset, length)";
/**
* Join two (non null) paths, inserting a forward slash between them
* if needed
*
* @param path1 first path
* @param path2 second path
* @return the combined path
*/
public static String joinPaths(String path1, String path2) {
StringBuilder result =
new StringBuilder(path1.length() + path2.length() + 1);
result.append(path1);
boolean insertSlash = true;
if (path1.endsWith("/")) {
insertSlash = false;
} else if (path2.startsWith("/")) {
insertSlash = false;
}
if (insertSlash) {
result.append("/");
}
result.append(path2);
return result.toString();
}
/**
* Predicate: Is a swift object referring to the root directory?
* @param swiftObject object to probe
* @return true iff the object refers to the root
*/
public static boolean isRootDir(SwiftObjectPath swiftObject) {
return swiftObject.objectMatches("") || swiftObject.objectMatches("/");
}
/**
* Sprintf() to the log iff the log is at debug level. If the log
* is not at debug level, the printf operation is skipped, so
* no time is spent generating the string.
* @param log log to use
* @param text text message
* @param args arguments to the print statement
*/
public static void debug(Log log, String text, Object... args) {
if (log.isDebugEnabled()) {
log.debug(String.format(text, args));
}
}
/**
* Log an exception (in text and trace) iff the log is at debug
* @param log Log to use
* @param text text message
* @param ex exception
*/
public static void debugEx(Log log, String text, Exception ex) {
if (log.isDebugEnabled()) {
log.debug(text + ex, ex);
}
}
/**
* Sprintf() to the log iff the log is at trace level. If the log
* is not at trace level, the printf operation is skipped, so
* no time is spent generating the string.
* @param log log to use
* @param text text message
* @param args arguments to the print statement
*/
public static void trace(Log log, String text, Object... args) {
if (log.isTraceEnabled()) {
log.trace(String.format(text, args));
}
}
/**
* Given a partition number, calculate the partition value.
* This is used in the SwiftNativeOutputStream, and is placed
* here for tests to be able to calculate the filename of
* a partition.
* @param partNumber part number
* @return a string to use as the filename
*/
public static String partitionFilenameFromNumber(int partNumber) {
return String.format("%06d", partNumber);
}
/**
* List a path as a string
* @param fileSystem filesystem
* @param path directory
* @return a listing of the filestatuses of elements in the directory, one
* to a line, preceded by the full path of the directory
* @throws IOException connectivity problems
*/
public static String ls(FileSystem fileSystem, Path path) throws
IOException {
if (path == null) {
//surfaces when someone calls getParent() on something at the top of the path
return "/";
}
FileStatus[] stats;
String pathtext = "ls " + path;
try {
stats = fileSystem.listStatus(path);
} catch (FileNotFoundException e) {
return pathtext + " -file not found";
} catch (IOException e) {
return pathtext + " -failed: " + e;
}
return pathtext + fileStatsToString(stats, "\n");
}
/**
* Take an array of filestats and convert to a string, each entry prefixed with a two-digit counter
* @param stats array of stats
* @param separator separator after every entry
* @return a stringified set
*/
public static String fileStatsToString(FileStatus[] stats, String separator) {
StringBuilder buf = new StringBuilder(stats.length * 128);
for (int i = 0; i < stats.length; i++) {
buf.append(String.format("[%02d] %s", i, stats[i])).append(separator);
}
return buf.toString();
}
/**
* Verify that the basic args to a read operation are valid;
* throws an exception if not -with meaningful text including the invalid value
* @param buffer destination buffer
* @param off offset
* @param len number of bytes to read
* @throws NullPointerException null buffer
* @throws IndexOutOfBoundsException on any invalid range.
*/
public static void validateReadArgs(byte[] buffer, int off, int len) {
if (buffer == null) {
throw new NullPointerException("Null byte array in"+ READ);
}
if (off < 0 ) {
throw new IndexOutOfBoundsException("Negative buffer offset "
+ off
+ " in " + READ);
}
if (len < 0 ) {
throw new IndexOutOfBoundsException("Negative read length "
+ len
+ " in " + READ);
}
if (off > buffer.length) {
throw new IndexOutOfBoundsException("Buffer offset of "
+ off
+ "beyond buffer size of "
+ buffer.length
+ " in " + READ);
}
}
}
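A few illustrative calls against the utilities above; the expected outputs in
the trailing comments follow directly from the code as written.

import org.apache.hadoop.fs.swift.util.SwiftUtils;

public class SwiftUtilsSketch {
  public static void main(String[] args) {
    //a slash is inserted only when neither side already supplies one
    System.out.println(SwiftUtils.joinPaths("data", "set1"));    // -> data/set1
    System.out.println(SwiftUtils.joinPaths("data/", "set1"));   // -> data/set1
    //partition filenames are zero-padded to six digits
    System.out.println(SwiftUtils.partitionFilenameFromNumber(3)); // -> 000003
    //invalid arguments would fail fast with IndexOutOfBoundsException;
    //this call is valid and returns quietly
    SwiftUtils.validateReadArgs(new byte[8], 0, 4);
  }
}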


@ -1,686 +0,0 @@
~~ Licensed under the Apache License, Version 2.0 (the "License");
~~ you may not use this file except in compliance with the License.
~~ You may obtain a copy of the License at
~~
~~ http://www.apache.org/licenses/LICENSE-2.0
~~
~~ Unless required by applicable law or agreed to in writing, software
~~ distributed under the License is distributed on an "AS IS" BASIS,
~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~~ See the License for the specific language governing permissions and
~~ limitations under the License. See accompanying LICENSE file.
---
Hadoop OpenStack Support: Swift Object Store
---
---
${maven.build.timestamp}
%{toc|section=1|fromDepth=0}
Hadoop OpenStack Support: Swift Object Store
* {Introduction}
{{{http://www.openstack.org/}OpenStack}} is an open source cloud infrastructure
which can be accessed
from multiple public IaaS providers, and deployed privately. It offers
infrastructure services such as VM hosting (Nova), authentication (Keystone)
and storage of binary objects (Swift).
This module enables Apache Hadoop applications -including MapReduce jobs-
to read and write data to and from instances of the
{{{http://www.openstack.org/software/openstack-storage/}OpenStack Swift object store}}.
* Features
* Read and write of data stored in a Swift object store
* Support of a pseudo-hierarchical file system (directories, subdirectories and
files)
* Standard filesystem operations: <<<create>>>, <<<delete>>>, <<<mkdir>>>,
<<<ls>>>, <<<mv>>>, <<<stat>>>.
* Can act as a source of data in a MapReduce job, or a sink.
* Support for multiple OpenStack services, and multiple containers from a
single service.
* Supports in-cluster and remote access to Swift data.
* Supports OpenStack Keystone authentication with password or token.
* Released under the Apache Software License
* Tested against the Hadoop 3.x and 1.x branches, against multiple public
OpenStack clusters: Rackspace US, Rackspace UK, HP Cloud.
* Tested against private OpenStack clusters, including scalability tests of
large file uploads.
* Using the Hadoop Swift Filesystem Client
** Concepts: services and containers
OpenStack swift is an <Object Store>, also known as a <blobstore>. It stores
arbitrary binary objects by name in a <container>.
The Hadoop Swift filesystem library adds another concept, the <service>, which
defines which Swift blobstore hosts a container -and how to connect to it.
** Containers and Objects
* Containers are created by users with accounts on the Swift filestore, and hold
<objects>.
* Objects can be zero bytes long, or they can contain data.
* Objects in the container can be up to 5GB; there is special support for
files larger than this, which merges multiple objects into one.
* Each object is referenced by its <name>; there is no notion of directories.
* You can use any characters in an object name that can be 'URL-encoded'; the
maximum length of a name is 1034 characters -after URL encoding.
* Names can have <<</>>> characters in them, which are used to create the illusion of
a directory structure. For example <<<dir/dir2/name>>>. Even though this looks
like a directory, <it is still just a name>. There is no requirement to have
any entries in the container called <<<dir>>> or <<<dir/dir2>>>
* That said, if the container has zero-byte objects that look like directory
names above other objects, they can pretend to be directories. Continuing the
example, a 0-byte object called <<<dir>>> would tell clients that it is a
directory while <<<dir/dir2>>> or <<<dir/dir2/name>>> were present. This creates an
illusion of containers holding a filesystem, as in the sketch below.
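For example, a container with the following (hypothetical) object names
presents the illusion of a small directory tree:

+--
dir                <- 0-byte object acting as a directory marker
dir/dir2           <- 0-byte object acting as a nested directory marker
dir/dir2/name      <- object holding the actual data
+--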
Client applications talk to Swift over HTTP or HTTPS, reading, writing and
deleting objects using standard HTTP operations (GET, PUT and DELETE,
respectively). There is also a COPY operation that creates a new object in the
container, with a new name, containing the old data. There is no rename
operation itself; objects need to be copied -then the original entry deleted.
** Eventual Consistency
The Swift Filesystem is *eventually consistent*: an operation on an object may
not be immediately visible to that client, or other clients. This is a
consequence of the goal of the filesystem: to span a set of machines, across
multiple datacenters, in such a way that the data can still be available when
many of them fail. (In contrast, the Hadoop HDFS filesystem is *immediately
consistent*, but it does not span datacenters.)
Eventual consistency can cause surprises for client applications that expect
immediate consistency: after an object is deleted or overwritten, the object
may still be visible -or the old data still retrievable. The Swift Filesystem
client for Apache Hadoop attempts to handle this, in conjunction with the
MapReduce engine, but there may still be occasions when eventual consistency
causes surprises.
** Non-atomic "directory" operations.
Hadoop expects some
operations to be atomic, especially <<<rename()>>>, which is something
the MapReduce layer relies on to commit the output of a job, renaming data
from a temp directory to the final path. Because a rename
is implemented as a copy of every blob under the directory's path, followed
by a delete of the originals, the intermediate state of the operation
will be visible to other clients. If two Reducer tasks try to rename their temp
directory to the final path, both operations may succeed, with the result that
the output directory contains mixed data. This can happen if MapReduce jobs
are being run with <speculation> enabled and Swift used as the direct output
of the MR job (it can also happen against Amazon S3).
Other consequences of the non-atomic operations are:
1. If a program is looking for the presence of the directory before acting
on the data -it may start prematurely. This can be avoided by using
other mechanisms to co-ordinate the programs, such as the presence of a file
that is written <after> any bulk directory operations.
2. A <<<rename()>>> or <<<delete()>>> operation may include files added under
the source directory tree during the operation, may unintentionally delete
it, or delete the 0-byte swift entries that mimic directories and act
as parents for the files. Try to avoid doing this.
The best way to avoid all these problems is not to use Swift as
the filesystem between MapReduce jobs or other Hadoop workflows. It
can act as a source of data, and a final destination, but it doesn't meet
all of Hadoop's expectations of what a filesystem is -it's a <blobstore>.
* Working with Swift Object Stores in Hadoop
Once installed, the Swift FileSystem client can be used by any Hadoop application
to read from or write to data stored in a Swift container.
Data stored in Swift can be used as the direct input to a MapReduce job
-simply use the <<<swift:>>> URL (see below) to declare the source of the data.
This Swift Filesystem client is designed to work with multiple
Swift object stores, both public and private. This allows the client to work
with different clusters, reading and writing data to and from either of them.
It can also work with the same object stores using multiple login details.
These features are achieved by one basic concept: using a service name in
the URI referring to a swift filesystem, and looking up all the connection and
login details for that specific service. Different service names can be defined
in the Hadoop XML configuration file, thereby defining different clusters, or
providing different login details for the same object store(s).
** Swift Filesystem URIs
Hadoop uses URIs to refer to files within a filesystem. Some common examples
are:
+--
local://etc/hosts
hdfs://cluster1/users/example/data/set1
hdfs://cluster2.example.org:8020/users/example/data/set1
+--
The Swift Filesystem Client adds a new URL type <<<swift>>>. In a Swift Filesystem
URL, the hostname part of a URL identifies the container and the service to
work with; the path is the name of the object. Here are some examples:
+--
swift://container.rackspace/my-object.csv
swift://data.hpcloud/data/set1
swift://dmitry.privatecloud/out/results
+--
In the last two examples, the paths look like directories: they are not; they
are simply the objects named <<<data/set1>>> and <<<out/results>>> respectively.
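As a sketch of how such a URI is consumed in application code -the container
<<<data>>> and service <<<myservice>>> below are illustrative, and must match
a configured service definition:
+--
// Sketch: opening an object through a swift:// URI. The service name
// "myservice" and container "data" are illustrative.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadFromSwift {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // must hold the service definition
    FileSystem fs = FileSystem.get(URI.create("swift://data.myservice/"), conf);
    FSDataInputStream in =
        fs.open(new Path("swift://data.myservice/set1/part-00000"));
    try {
      System.out.println("first byte: " + in.read());
    } finally {
      in.close();
    }
  }
}
+--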
** Installing
The <<<hadoop-openstack>>> JAR must be on the classpath of the Hadoop program trying to
talk to the Swift service. If installed in the classpath of the Hadoop
MapReduce service, then all programs started by the MR engine will pick up the
JAR automatically. This is the easiest way to give all Hadoop jobs access to
Swift.
Alternatively, the JAR can be included as one of the JAR files that an
application uses. This lets the Hadoop jobs work with a Swift object store even
if the Hadoop cluster is not pre-configured for this.
The library also depends upon the Apache HttpComponents library, which
must also be on the classpath.
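One way of attaching the JAR to a single job from application code is through
the distributed cache -a sketch, where the HDFS path to the staged JAR is an
assumption:
+--
// Sketch: attaching the hadoop-openstack JAR to one job's classpath.
// The path "/libs/hadoop-openstack.jar" is hypothetical; the JAR must
// already have been staged there.
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class AttachSwiftJar {
  static void attach(Job job) throws IOException {
    job.addFileToClassPath(new Path("/libs/hadoop-openstack.jar"));
  }
}
+--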
** Configuring
To talk to a swift service, the user must provide:
[[1]] The URL defining the container and the service.
[[1]] In the cluster/job configuration, the login details of that service.
Multiple service definitions can co-exist in the same configuration file: just
use different names for them.
*** Example: Rackspace US, in-cluster access using API key
This service definition is for use in a Hadoop cluster deployed within Rackspace's
US infrastructure.
+--
<property>
<name>fs.swift.service.rackspace.auth.url</name>
<value>https://auth.api.rackspacecloud.com/v2.0/tokens</value>
<description>Rackspace US (multiregion)</description>
</property>
<property>
<name>fs.swift.service.rackspace.username</name>
<value>user4</value>
</property>
<property>
<name>fs.swift.service.rackspace.region</name>
<value>DFW</value>
</property>
<property>
<name>fs.swift.service.rackspace.apikey</name>
<value>fe806aa86dfffe2f6ed8</value>
</property>
+--
Here the API key visible in the account settings API keys page is used to log
in. There is no property for public/private access -the default is to use the
private endpoint for Swift operations.
This configuration also selects one of the regions, DFW, for its data.
A reference to this service would use the <<<rackspace>>> service name:
---
swift://hadoop-container.rackspace/
---
*** Example: Rackspace UK: remote access with password authentication
This connects to Rackspace's UK ("LON") datacenter.
+--
<property>
<name>fs.swift.service.rackspaceuk.auth.url</name>
<value>https://lon.identity.api.rackspacecloud.com/v2.0/tokens</value>
<description>Rackspace UK</description>
</property>
<property>
<name>fs.swift.service.rackspaceuk.username</name>
<value>user4</value>
</property>
<property>
<name>fs.swift.service.rackspaceuk.password</name>
<value>insert-password-here</value>
</property>
<property>
<name>fs.swift.service.rackspaceuk.public</name>
<value>true</value>
</property>
+--
This is a public access point connection, authenticating with a password
rather than an API key.
A reference to this service would use the <<<rackspaceuk>>> service name:
+--
swift://hadoop-container.rackspaceuk/
+--
Because the public endpoint is used, if this service definition is used within
the London datacenter, all accesses will be billed at the public
upload/download rates, <irrespective of where the Hadoop cluster is>.
*** Example: HP cloud service definition
Here is an example that connects to the HP Cloud object store.
+--
<property>
<name>fs.swift.service.hpcloud.auth.url</name>
<value>https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/tokens
</value>
<description>HP Cloud</description>
</property>
<property>
<name>fs.swift.service.hpcloud.tenant</name>
<value>FE806AA86</value>
</property>
<property>
<name>fs.swift.service.hpcloud.username</name>
<value>FE806AA86DFFFE2F6ED8</value>
</property>
<property>
<name>fs.swift.service.hpcloud.password</name>
<value>secret-password-goes-here</value>
</property>
<property>
<name>fs.swift.service.hpcloud.public</name>
<value>true</value>
</property>
+--
A reference to this service would use the <<<hpcloud>>> service name:
+--
swift://hadoop-container.hpcloud/
+--
** General Swift Filesystem configuration options
Some configuration options apply to the Swift client, independent of
the specific Swift filesystem chosen.
*** Blocksize fs.swift.blocksize
Swift does not break up files into blocks, except in the special case of files
over 5GB in length. Accordingly, there isn't a notion of a "block size"
to define where the data is kept.
Hadoop's MapReduce layer depends on files declaring their block size,
so that it knows how to partition work. Too small a blocksize means that
many mappers work on small pieces of data; too large a block size means
that only a few mappers get started.
The block size value reported by the Swift client, therefore, controls the basic
workload partitioning of the MapReduce engine -and can be an important parameter
to tune for performance of the cluster.
The property has a unit of kilobytes; the default value is <<<32*1024>>>: 32 MB.
+--
<property>
<name>fs.swift.blocksize</name>
<value>32768</value>
</property>
+--
This blocksize has no influence on how files are stored in Swift; it only controls
the reported block size -a value used in Hadoop MapReduce to
divide work.
Note that the MapReduce engine's split logic can be tuned independently by setting
the <<<mapred.min.split.size>>> and <<<mapred.max.split.size>>> properties,
which can be done in specific job configurations.
+--
<property>
<name>mapred.min.split.size</name>
<value>524288</value>
</property>
<property>
<name>mapred.max.split.size</name>
<value>1048576</value>
</property>
+--
In an Apache Pig script, these properties would be set as:
---
set mapred.min.split.size 524288;
set mapred.max.split.size 1048576;
---
*** Partition size fs.swift.partsize
The Swift filesystem client breaks very large files into partitioned files,
uploading each as it progresses, and writing any remaining data and an XML
manifest when a partitioned file is closed.
The partition size defaults to 4608 MB (4.5 GB), the maximum filesize that
Swift can support.
It is possible to set a smaller partition size, in the <<<fs.swift.partsize>>>
option. This takes a value in KB.
+--
<property>
<name>fs.swift.partsize</name>
<value>1024</value>
<description>upload every MB</description>
</property>
+--
When should this value be changed from its default?
While there is no need to ever change it for basic operation of
the Swift filesystem client, it can be tuned:
* If a Swift filesystem is location aware, then breaking a file up into
smaller partitions scatters the data around the cluster. For best performance,
the property <<<fs.swift.blocksize>>> should be set to a smaller value than the
partition size of files.
* When writing to an unpartitioned file, the entire write is done in the
<<<close()>>> operation. When a file is partitioned, the outstanding data is
written whenever the amount of buffered data exceeds the partition size. This
means that data will be written more incrementally.
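Both sizes can also be set programmatically on a per-job basis -a sketch, with
values chosen purely for illustration:
+--
// Sketch: tuning partition and reported block sizes (both in KB) for
// one job. The specific numbers are illustrative, not recommendations.
import org.apache.hadoop.conf.Configuration;

public class TuneSwiftSizes {
  static void tune(Configuration conf) {
    conf.setInt("fs.swift.partsize", 100 * 1024);  // 100 MB partitions
    conf.setInt("fs.swift.blocksize", 32 * 1024);  // 32 MB reported blocks
  }
}
+--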
*** Request size fs.swift.requestsize
The Swift filesystem client reads files in HTTP GET operations, asking for
a block of data at a time.
The default value is 64KB. A larger value may be more efficient over faster
networks, as it reduces the overhead of setting up the HTTP operation.
However, if the file is read with many random accesses, requests for
data will be made from different parts of the file -discarding some of the
previously requested data. The benefits of larger request sizes may be wasted.
The property <<<fs.swift.requestsize>>> sets the request size in KB.
+--
<property>
<name>fs.swift.requestsize</name>
<value>128</value>
</property>
+--
*** Connection timeout fs.swift.connect.timeout
This sets the timeout in milliseconds to connect to a Swift service.
+--
<property>
<name>fs.swift.connect.timeout</name>
<value>15000</value>
</property>
+--
A shorter timeout means that connection failures are raised faster -but
may trigger more false alarms. A longer timeout is more resilient to network
problems -and may be needed when talking to remote filesystems.
*** Socket timeout fs.swift.socket.timeout
This sets the timeout in milliseconds to wait for data from a connected socket.
+--
<property>
<name>fs.swift.socket.timeout</name>
<value>60000</value>
</property>
+--
A shorter timeout means that connection failures are raised faster -but
may trigger more false alarms. A longer timeout is more resilient to network
problems -and may be needed when talking to remote filesystems.
*** Connection Retry Count fs.swift.connect.retry.count
This sets the number of times to try to connect to a service whenever
an HTTP request is made.
+--
<property>
<name>fs.swift.connect.retry.count</name>
<value>3</value>
</property>
+--
The more retries, the more resilient it is to transient outages -and the
less rapid it is at detecting and reporting server connectivity problems.
*** Connection Throttle Delay fs.swift.connect.throttle.delay
This property adds a delay between bulk file copy and delete operations,
to prevent requests being throttled or blocked by the remote service.
+--
<property>
<name>fs.swift.connect.throttle.delay</name>
<value>0</value>
</property>
+--
It is measured in milliseconds; "0" means do not add any delay.
Throttling is enabled on the public endpoints of some Swift services.
If <<<rename()>>> or <<<delete()>>> operations fail with
<<<SwiftThrottledRequestException>>>
exceptions, try setting this property.
*** HTTP Proxy
If the client can only access the Swift filesystem via a web proxy
server, the client configuration must specify the proxy via
the <<<fs.swift.proxy.host>>> and <<<fs.swift.proxy.port>>>
properties.
+--
<property>
<name>fs.swift.proxy.host</name>
<value>web-proxy</value>
</property>
<property>
<name>fs.swift.proxy.port</name>
<value>8088</value>
</property>
+--
If the host is declared, the proxy port must be set to a valid integer value.
** Troubleshooting
*** ClassNotFoundException
The <<<hadoop-openstack>>> JAR -or any dependencies- may not be on your classpath.
If it is a remote MapReduce job that is failing, make sure that the JAR is
installed on the servers in the cluster -or that the job submission process
uploads the JAR file to the distributed cache.
*** Failure to Authenticate
A <<<SwiftAuthenticationFailedException>>> is thrown when the client
cannot authenticate with the OpenStack keystone server. This could be
because the URL in the service definition is wrong, or because
the supplied credentials are invalid.
[[1]] Check the authentication URL through <<<curl>>> or your browser
[[1]] Use a Swift client such as CyberDuck to validate your credentials
[[1]] If you have included a tenant ID, try leaving it out. Similarly,
try adding it if you had not included it.
[[1]] Try switching from API key authentication to password-based authentication,
by setting the password.
[[1]] Change your credentials. As with Amazon AWS clients, some credentials
don't seem to like going over the network.
*** Timeout connecting to the Swift Service
This happens if the client application is running outside an OpenStack cluster,
where it does not have access to the private hostname/IP address for filesystem
operations. Set the <<<public>>> flag to true -but remember to set it to false
for use in-cluster.
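A sketch of flipping that flag in code; the service name <<<mycloud>>> is
illustrative:
+--
// Sketch: selecting the public endpoint for a service named "mycloud".
// Remember to set this back to false for in-cluster use.
import org.apache.hadoop.conf.Configuration;

public class UsePublicEndpoint {
  static void enable(Configuration conf) {
    conf.setBoolean("fs.swift.service.mycloud.public", true);
  }
}
+--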
** Warnings
[[1]] Do not share your login details with anyone, which means do not log the
details, or check the XML configuration files into any revision control system
to which you do not have exclusive access.
[[1]] Similarly, do not use your real account details in any documentation *or any
bug reports submitted online*.
[[1]] Prefer the apikey authentication over passwords as it is easier
to revoke a key -and some service providers allow you to set
an automatic expiry date on a key when issued.
[[1]] Do not use the public service endpoint from within a public OpenStack
cluster, as it will run up large bills.
[[1]] Remember: it's not a real filesystem or hierarchical directory structure.
Some operations (directory rename and delete) take time and are not atomic or
isolated from other operations taking place.
[[1]] Append is not supported.
[[1]] Unix-style permissions are not supported. All accounts with write access to
a repository have unlimited access; the same goes for those with read access.
[[1]] In the public clouds, do not make the containers public unless you are happy
with anyone reading your data, and are prepared to pay the costs of their
downloads.
** Limits
* Maximum length of an object path: 1024 characters
* Maximum size of a binary object: no absolute limit. Files > 5GB are
partitioned into separate files in the native filesystem, and merged during
retrieval. <Warning:> the partitioned/large file support is the
most complex part of the Hadoop/Swift FS integration, and, along with
authentication, the most troublesome to support.
** Testing the hadoop-openstack module
The <<<hadoop-openstack>>> module can be remotely tested against any public
or private cloud infrastructure which supports the OpenStack Keystone
authentication mechanism. It can also be tested against private
OpenStack clusters. OpenStack Development teams are strongly encouraged to test
the Hadoop swift filesystem client against any version of Swift that they
are developing or deploying, to stress their cluster and to identify
bugs early.
The module comes with a large suite of JUnit tests -tests that are
only executed if the source tree includes credentials to test against a
specific cluster.
After checking out the Hadoop source tree, create the file:
+--
hadoop-tools/hadoop-openstack/src/test/resources/auth-keys.xml
+--
Into this file, insert the credentials needed to bond to the test filesystem,
as described above.
Next set the property <<<test.fs.swift.name>>> to the URL of a
swift container to test against. The tests expect exclusive access
to this container -do not keep any other data on it, or expect it
to be preserved.
+--
<property>
<name>test.fs.swift.name</name>
<value>swift://test.myswift/</value>
</property>
+--
In the base hadoop directory, run:
+--
mvn clean install -DskipTests
+--
This builds a set of Hadoop JARs consistent with the <<<hadoop-openstack>>>
module that is about to be tested.
In the <<<hadoop-tools/hadoop-openstack>>> directory, run:
+--
mvn test -Dtest=TestSwiftRestClient
+--
This runs some simple tests which include authenticating
against the remote swift service. If these tests fail, so will all
the rest. If they do fail, check your authentication.
Once this test succeeds, you can run the full test suite:
+--
mvn test
+--
Be advised that these tests can take an hour or more, especially against a
remote Swift service -or one that throttles bulk operations.
Once the <<<auth-keys.xml>>> file is in place, the <<<mvn test>>> runs from
the Hadoop source base directory will automatically run these OpenStack tests.
While this ensures that no regressions have occurred, it can also add significant
time to test runs, and may run up bills, depending on who is providing
the Swift storage service. We recommend having a separate source tree
set up purely for the Swift tests, and running it manually or by the CI tooling
at a lower frequency than normal test runs.
Finally: Apache Hadoop is an open source project. Contributions of code
-including more tests- are very welcome.
View File
@ -1,46 +0,0 @@
<!--
~ Licensed to the Apache Software Foundation (ASF) under one
~ or more contributor license agreements. See the NOTICE file
~ distributed with this work for additional information
~ regarding copyright ownership. The ASF licenses this file
~ to you under the Apache License, Version 2.0 (the
~ "License"); you may not use this file except in compliance
~ with the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<project name="Apache Hadoop ${project.version}">
<skin>
<groupId>org.apache.maven.skins</groupId>
<artifactId>maven-stylus-skin</artifactId>
<version>1.2</version>
</skin>
<body>
<links>
<item name="Apache Hadoop" href="http://hadoop.apache.org/"/>
</links>
</body>
</project>
View File
@ -1,31 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
/**
* A path filter that accepts everything
*/
public class AcceptAllFilter implements PathFilter {
@Override
public boolean accept(Path file) {
return true;
}
}
View File
@ -1,400 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.swift.exceptions.SwiftOperationFailedException;
import org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem;
import org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore;
import org.apache.hadoop.fs.swift.util.DurationStats;
import org.apache.hadoop.fs.swift.util.SwiftTestUtils;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.Assume;
import org.junit.Before;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.OutputStream;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.List;
import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.assertPathExists;
import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.cleanupInTeardown;
import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.getServiceURI;
import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.noteAction;
/**
* This is the base class for most of the Swift tests
*/
public class SwiftFileSystemBaseTest extends Assert implements
SwiftTestConstants {
protected static final Log LOG =
LogFactory.getLog(SwiftFileSystemBaseTest.class);
protected SwiftNativeFileSystem fs;
protected static SwiftNativeFileSystem lastFs;
protected byte[] data = SwiftTestUtils.dataset(getBlockSize() * 2, 0, 255);
private Configuration conf;
@Before
public void setUp() throws Exception {
noteAction("setup");
final URI uri = getFilesystemURI();
conf = createConfiguration();
fs = createSwiftFS();
try {
fs.initialize(uri, conf);
} catch (IOException e) {
//FS init failed, set it to null so that teardown doesn't
//attempt to use it
fs = null;
throw e;
}
//remember the last FS
lastFs = fs;
noteAction("setup complete");
}
/**
* Configuration generator. May be overridden to inject
* some custom options
* @return a configuration with which to create FS instances
*/
protected Configuration createConfiguration() {
return new Configuration();
}
@After
public void tearDown() throws Exception {
cleanupInTeardown(fs, "/test");
}
@AfterClass
public static void classTearDown() throws Exception {
if (lastFs != null) {
List<DurationStats> statistics = lastFs.getOperationStatistics();
for (DurationStats stat : statistics) {
LOG.info(stat.toString());
}
}
}
/**
* Get the configuration used to set up the FS
* @return the configuration
*/
public Configuration getConf() {
return conf;
}
/**
* Describe the test, combining some logging with details
* for people reading the code
*
* @param description test description
*/
protected void describe(String description) {
noteAction(description);
}
protected URI getFilesystemURI() throws URISyntaxException, IOException {
return getServiceURI(createConfiguration());
}
protected SwiftNativeFileSystem createSwiftFS() throws IOException {
SwiftNativeFileSystem swiftNativeFileSystem =
new SwiftNativeFileSystem();
return swiftNativeFileSystem;
}
protected int getBlockSize() {
return 1024;
}
/**
* Is rename supported?
* @return true
*/
protected boolean renameSupported() {
return true;
}
/**
* assume in a test that rename is supported;
* skip it if not
*/
protected void assumeRenameSupported() {
Assume.assumeTrue(renameSupported());
}
/**
* Take an unqualified path, and qualify it w.r.t the
* current filesystem
* @param pathString source path
* @return a qualified path instance
*/
protected Path path(String pathString) {
return new Path(pathString).makeQualified(fs);
}
/**
* Get the filesystem
* @return the current FS
*/
public SwiftNativeFileSystem getFs() {
return fs;
}
/**
* Create a file using the standard {@link #data} bytes.
*
* @param path path to write
* @throws IOException on any problem
*/
protected void createFile(Path path) throws IOException {
createFile(path, data);
}
/**
* Create a file with the given data.
*
* @param path path to write
* @param sourceData source dataset
* @throws IOException on any problem
*/
protected void createFile(Path path, byte[] sourceData) throws IOException {
FSDataOutputStream out = fs.create(path);
out.write(sourceData, 0, sourceData.length);
out.close();
}
/**
* Create and then close a file
* @param path path to create
* @throws IOException on a failure
*/
protected void createEmptyFile(Path path) throws IOException {
FSDataOutputStream out = fs.create(path);
out.close();
}
/**
* Get the inner store -useful for lower level operations
*
* @return the store
*/
protected SwiftNativeFileSystemStore getStore() {
return fs.getStore();
}
/**
* Rename a path
* @param src source
* @param dst dest
* @param renameMustSucceed flag to say "this rename must succeed"
* @param srcExists add assert that the source exists afterwards
* @param dstExists add assert the dest exists afterwards
* @throws IOException IO trouble
*/
protected void rename(Path src, Path dst, boolean renameMustSucceed,
boolean srcExists, boolean dstExists) throws IOException {
if (renameMustSucceed) {
renameToSuccess(src, dst, srcExists, dstExists);
} else {
renameToFailure(src, dst);
}
}
/**
* Get a string describing the outcome of a rename, by listing the dest
* path and its parent along with some covering text
* @param src source path
* @param dst dest path
* @return a string for logs and exceptions
* @throws IOException IO problems
*/
private String getRenameOutcome(Path src, Path dst) throws IOException {
String lsDst = ls(dst);
Path parent = dst.getParent();
String lsParent = parent != null ? ls(parent) : "";
return " result of " + src + " => " + dst
+ " - " + lsDst
+ " \n" + lsParent;
}
/**
* Rename, expecting an exception to be thrown
*
* @param src source
* @param dst dest
* @throws IOException a failure other than an
* expected SwiftOperationFailedException or FileNotFoundException
*/
protected void renameToFailure(Path src, Path dst) throws IOException {
try {
getStore().rename(src, dst);
fail("Expected failure renaming " + src + " to " + dst
+ "- but got success");
} catch (SwiftOperationFailedException e) {
LOG.debug("Rename failed (expected):" + e);
} catch (FileNotFoundException e) {
LOG.debug("Rename failed (expected):" + e);
}
}
/**
* Rename to success
*
* @param src source
* @param dst dest
* @param srcExists add assert that the source exists afterwards
* @param dstExists add assert the dest exists afterwards
* @throws SwiftOperationFailedException operation failure
* @throws IOException IO problems
*/
protected void renameToSuccess(Path src, Path dst,
boolean srcExists, boolean dstExists)
throws SwiftOperationFailedException, IOException {
getStore().rename(src, dst);
String outcome = getRenameOutcome(src, dst);
assertEquals("Source " + src + "exists: " + outcome,
srcExists, fs.exists(src));
assertEquals("Destination " + dstExists + " exists" + outcome,
dstExists, fs.exists(dst));
}
/**
* List a path in the test FS
* @param path path to list
* @return the contents of the path/dir
* @throws IOException IO problems
*/
protected String ls(Path path) throws IOException {
return SwiftTestUtils.ls(fs, path);
}
/**
* assert that a path exists
* @param message message to use in an assertion
* @param path path to probe
* @throws IOException IO problems
*/
public void assertExists(String message, Path path) throws IOException {
assertPathExists(fs, message, path);
}
/**
* assert that a path does not exist
* @param message message to use in an assertion
* @param path path to probe
* @throws IOException IO problems
*/
public void assertPathDoesNotExist(String message, Path path) throws
IOException {
SwiftTestUtils.assertPathDoesNotExist(fs, message, path);
}
/**
* Assert that a file exists and that its {@link FileStatus} entry
* declares that this is a file and not a symlink or directory.
*
* @param filename name of the file
* @throws IOException IO problems during file operations
*/
protected void assertIsFile(Path filename) throws IOException {
SwiftTestUtils.assertIsFile(fs, filename);
}
/**
* Make a directory, asserting that the mkdirs() call succeeded.
*
* @param path path of the directory to create
* @throws IOException IO problems during file operations
*/
protected void mkdirs(Path path) throws IOException {
assertTrue("Failed to mkdir " + path, fs.mkdirs(path));
}
/**
* Assert that a delete succeeded
* @param path path to delete
* @param recursive recursive flag
* @throws IOException IO problems
*/
protected void assertDeleted(Path path, boolean recursive) throws IOException {
SwiftTestUtils.assertDeleted(fs, path, recursive);
}
/**
* Assert that a value is not equal to the expected value
* @param message message if the two values are equal
* @param expected expected value
* @param actual actual value
*/
protected void assertNotEqual(String message, int expected, int actual) {
assertTrue(message,
actual != expected);
}
/**
* Get the number of partitions written from the Swift Native FS APIs
* @param out output stream
* @return the number of partitioned files written by the stream
*/
protected int getPartitionsWritten(FSDataOutputStream out) {
return SwiftNativeFileSystem.getPartitionsWritten(out);
}
/**
* Assert that the no. of partitions written matches expectations
* @param action operation (for use in the assertions)
* @param out output stream
* @param expected expected no. of partitions
*/
protected void assertPartitionsWritten(String action, FSDataOutputStream out,
long expected) {
OutputStream nativeStream = out.getWrappedStream();
int written = getPartitionsWritten(out);
if (written != expected) {
Assert.fail(action + ": " +
TestSwiftFileSystemPartitionedUploads.WRONG_PARTITION_COUNT
+ " expected: " + expected + " actual: " + written
+ " -- " + nativeStream);
}
}
/**
* Assert that the result value == -1, which implies
* that the end of the file has been reached
* @param text text to include in a message (usually the operation)
* @param result read result to validate
*/
protected void assertMinusOne(String text, int result) {
assertEquals(text + " wrong read result " + result, -1, result);
}
}
View File
@ -1,34 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift;
/**
* Hard coded constants for the test timeouts
*/
public interface SwiftTestConstants {
/**
* Timeout for swift tests: {@value}
*/
int SWIFT_TEST_TIMEOUT = 5 * 60 * 1000;
/**
* Timeout for tests performing bulk operations: {@value}
*/
int SWIFT_BULK_IO_TEST_TIMEOUT = 12 * 60 * 1000;
}
View File
@ -1,63 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.junit.Test;
import java.net.URL;
/**
* This test just debugs which log resources are being picked up
*/
public class TestLogResources implements SwiftTestConstants {
protected static final Log LOG =
LogFactory.getLog(TestLogResources.class);
private void printf(String format, Object... args) {
String msg = String.format(format, args);
System.out.printf(msg + "\n");
LOG.info(msg);
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testWhichLog4JPropsFile() throws Throwable {
locateResource("log4j.properties");
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testWhichLog4JXMLFile() throws Throwable {
locateResource("log4j.XML");
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testCommonsLoggingProps() throws Throwable {
locateResource("commons-logging.properties");
}
private void locateResource(String resource) {
URL url = this.getClass().getClassLoader().getResource(resource);
if (url != null) {
printf("resource %s is at %s", resource, url);
} else {
printf("resource %s is not on the classpath", resource);
}
}
}
View File
@ -1,163 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.swift.http.SwiftProtocolConstants;
import org.apache.hadoop.fs.swift.util.SwiftTestUtils;
import org.apache.hadoop.io.IOUtils;
import org.junit.After;
import org.junit.Test;
/**
* Seek tests verify that
* <ol>
* <li>When you seek on a 0 byte file to byte (0), it's not an error.</li>
* <li>When you seek past the end of a file, it's an error that should
* raise -what- EOFException?</li>
* <li>when you seek forwards, you get new data</li>
* <li>when you seek backwards, you get the previous data</li>
* <li>That this works for big multi-MB files as well as small ones.</li>
* </ol>
* These may seem "obvious", but the more the input streams try to be clever
* about offsets and buffering, the more likely it is that seek() will start
* to get confused.
*/
public class TestReadPastBuffer extends SwiftFileSystemBaseTest {
protected static final Log LOG =
LogFactory.getLog(TestReadPastBuffer.class);
public static final int SWIFT_READ_BLOCKSIZE = 4096;
public static final int SEEK_FILE_LEN = SWIFT_READ_BLOCKSIZE * 2;
private Path testPath;
private Path readFile;
private Path zeroByteFile;
private FSDataInputStream instream;
/**
* Get a configuration in which a small blocksize is reported to callers
* @return a configuration for this test
*/
@Override
public Configuration getConf() {
Configuration conf = super.getConf();
/*
* set to 4KB
*/
conf.setInt(SwiftProtocolConstants.SWIFT_BLOCKSIZE, SWIFT_READ_BLOCKSIZE);
return conf;
}
/**
* Setup creates dirs under test/hadoop
*
* @throws Exception
*/
@Override
public void setUp() throws Exception {
super.setUp();
byte[] block = SwiftTestUtils.dataset(SEEK_FILE_LEN, 0, 255);
//delete the test directory
testPath = path("/test");
readFile = new Path(testPath, "TestReadPastBuffer.txt");
createFile(readFile, block);
}
@After
public void cleanFile() {
IOUtils.closeStream(instream);
instream = null;
}
/**
* Create a config with a 1KB request size
* @return a config
*/
@Override
protected Configuration createConfiguration() {
Configuration conf = super.createConfiguration();
conf.set(SwiftProtocolConstants.SWIFT_REQUEST_SIZE, "1");
return conf;
}
/**
* Seek past the buffer then read
* @throws Throwable problems
*/
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testSeekAndReadPastEndOfFile() throws Throwable {
instream = fs.open(readFile);
assertEquals(0, instream.getPos());
//expect that seek to 0 works
//go just before the end
instream.seek(SEEK_FILE_LEN - 2);
assertTrue("Premature EOF", instream.read() != -1);
assertTrue("Premature EOF", instream.read() != -1);
assertMinusOne("read past end of file", instream.read());
}
/**
* Seek past the buffer and attempt a read(buffer)
* @throws Throwable failures
*/
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testSeekBulkReadPastEndOfFile() throws Throwable {
instream = fs.open(readFile);
assertEquals(0, instream.getPos());
//go just before the end
instream.seek(SEEK_FILE_LEN - 1);
byte[] buffer = new byte[1];
int result = instream.read(buffer, 0, 1);
//next byte is expected to fail
result = instream.read(buffer, 0, 1);
assertMinusOne("read past end of file", result);
//and this one
result = instream.read(buffer, 0, 1);
assertMinusOne("read past end of file", result);
//now do a 0-byte read and expect the zero-length
//check to be applied before the EOF check
result = instream.read(buffer, 0, 0);
assertEquals("EOF checks coming before read range check", 0, result);
}
/**
* Read past the buffer size byte by byte and verify that it refreshed
* @throws Throwable
*/
@Test
public void testReadPastBufferSize() throws Throwable {
instream = fs.open(readFile);
while (instream.read() != -1);
//here we have gone past the end of a file and its buffer. Now try again
assertMinusOne("reading after the (large) file was read: "+ instream,
instream.read());
}
}
View File
@ -1,260 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.swift.exceptions.SwiftConnectionClosedException;
import org.apache.hadoop.fs.swift.http.SwiftProtocolConstants;
import org.apache.hadoop.fs.swift.util.SwiftTestUtils;
import org.apache.hadoop.io.IOUtils;
import org.junit.After;
import org.junit.Test;
import java.io.EOFException;
import java.io.IOException;
/**
* Seek tests verify that
* <ol>
* <li>When you seek on a 0 byte file to byte (0), it's not an error.</li>
* <li>When you seek past the end of a file, it's an error that should
* raise -what- EOFException?</li>
* <li>when you seek forwards, you get new data</li>
* <li>when you seek backwards, you get the previous data</li>
* <li>That this works for big multi-MB files as well as small ones.</li>
* </ol>
* These may seem "obvious", but the more the input streams try to be clever
* about offsets and buffering, the more likely it is that seek() will start
* to get confused.
*/
public class TestSeek extends SwiftFileSystemBaseTest {
protected static final Log LOG =
LogFactory.getLog(TestSeek.class);
public static final int SMALL_SEEK_FILE_LEN = 256;
private Path testPath;
private Path smallSeekFile;
private Path zeroByteFile;
private FSDataInputStream instream;
/**
* Setup creates dirs under test/hadoop
*
* @throws Exception
*/
@Override
public void setUp() throws Exception {
super.setUp();
//delete the test directory
testPath = path("/test");
smallSeekFile = new Path(testPath, "seekfile.txt");
zeroByteFile = new Path(testPath, "zero.txt");
byte[] block = SwiftTestUtils.dataset(SMALL_SEEK_FILE_LEN, 0, 255);
//this file now has a simple rule: offset => value
createFile(smallSeekFile, block);
createEmptyFile(zeroByteFile);
}
@After
public void cleanFile() {
IOUtils.closeStream(instream);
instream = null;
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testSeekZeroByteFile() throws Throwable {
instream = fs.open(zeroByteFile);
assertEquals(0, instream.getPos());
//expect initial read to fail
int result = instream.read();
assertMinusOne("initial byte read", result);
byte[] buffer = new byte[1];
//expect that seek to 0 works
instream.seek(0);
//reread, expect the same -1 result
result = instream.read();
assertMinusOne("post-seek byte read", result);
result = instream.read(buffer, 0, 1);
assertMinusOne("post-seek buffer read", result);
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testBlockReadZeroByteFile() throws Throwable {
instream = fs.open(zeroByteFile);
assertEquals(0, instream.getPos());
//expect that seek to 0 works
byte[] buffer = new byte[1];
int result = instream.read(buffer, 0, 1);
assertMinusOne("block read zero byte file", result);
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testSeekReadClosedFile() throws Throwable {
instream = fs.open(smallSeekFile);
instream.close();
try {
instream.seek(0);
} catch (SwiftConnectionClosedException e) {
//expected a closed file
}
try {
instream.read();
} catch (IOException e) {
//expected a closed file
}
try {
byte[] buffer = new byte[1];
int result = instream.read(buffer, 0, 1);
} catch (IOException e) {
//expected a closed file
}
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testNegativeSeek() throws Throwable {
instream = fs.open(smallSeekFile);
assertEquals(0, instream.getPos());
try {
instream.seek(-1);
long p = instream.getPos();
LOG.warn("Seek to -1 returned a position of " + p);
int result = instream.read();
fail(
"expected an exception, got data " + result + " at a position of " + p);
} catch (IOException e) {
//bad seek -expected
}
assertEquals(0, instream.getPos());
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testSeekFile() throws Throwable {
instream = fs.open(smallSeekFile);
assertEquals(0, instream.getPos());
//expect that seek to 0 works
instream.seek(0);
int result = instream.read();
assertEquals(0, result);
assertEquals(1, instream.read());
assertEquals(2, instream.getPos());
assertEquals(2, instream.read());
assertEquals(3, instream.getPos());
instream.seek(128);
assertEquals(128, instream.getPos());
assertEquals(128, instream.read());
instream.seek(63);
assertEquals(63, instream.read());
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testSeekAndReadPastEndOfFile() throws Throwable {
instream = fs.open(smallSeekFile);
assertEquals(0, instream.getPos());
//expect that seek to 0 works
//go just before the end
instream.seek(SMALL_SEEK_FILE_LEN - 2);
assertTrue("Premature EOF", instream.read() != -1);
assertTrue("Premature EOF", instream.read() != -1);
assertMinusOne("read past end of file", instream.read());
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testSeekAndPastEndOfFileThenReseekAndRead() throws Throwable {
instream = fs.open(smallSeekFile);
//go just before the end. This may or may not fail; it may be delayed until the
//read
try {
instream.seek(SMALL_SEEK_FILE_LEN);
//if this doesn't trigger, then read() is expected to fail
assertMinusOne("read after seeking past EOF", instream.read());
} catch (EOFException expected) {
//here an exception was raised in seek
}
instream.seek(1);
assertTrue("Premature EOF", instream.read() != -1);
}
@Override
protected Configuration createConfiguration() {
Configuration conf = super.createConfiguration();
conf.set(SwiftProtocolConstants.SWIFT_REQUEST_SIZE, "1");
return conf;
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testSeekBigFile() throws Throwable {
Path testSeekFile = new Path(testPath, "bigseekfile.txt");
byte[] block = SwiftTestUtils.dataset(65536, 0, 255);
createFile(testSeekFile, block);
instream = fs.open(testSeekFile);
assertEquals(0, instream.getPos());
//expect that seek to 0 works
instream.seek(0);
int result = instream.read();
assertEquals(0, result);
assertEquals(1, instream.read());
assertEquals(2, instream.read());
//do seek 32KB ahead
instream.seek(32768);
assertEquals("@32768", block[32768], (byte) instream.read());
instream.seek(40000);
assertEquals("@40000", block[40000], (byte) instream.read());
instream.seek(8191);
assertEquals("@8191", block[8191], (byte) instream.read());
instream.seek(0);
assertEquals("@0", 0, (byte) instream.read());
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testPositionedBulkReadDoesntChangePosition() throws Throwable {
Path testSeekFile = new Path(testPath, "bigseekfile.txt");
byte[] block = SwiftTestUtils.dataset(65536, 0, 255);
createFile(testSeekFile, block);
instream = fs.open(testSeekFile);
instream.seek(39999);
assertTrue(-1 != instream.read());
assertEquals (40000, instream.getPos());
byte[] readBuffer = new byte[256];
instream.read(128, readBuffer, 0, readBuffer.length);
//the stream position must be unchanged
assertEquals(40000, instream.getPos());
//content is the same too
assertEquals("@40000", block[40000], (byte) instream.read());
//now verify the picked up data
for (int i = 0; i < 256; i++) {
assertEquals("@" + i, block[i + 128], readBuffer[i]);
}
}
/**
* work out the expected byte from a specific offset
* @param offset offset in the file
* @return the value
*/
int expectedByte(int offset) {
return offset & 0xff;
}
}
View File
@ -1,212 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.swift.http.SwiftRestClient;
import org.junit.Assert;
import org.junit.Test;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_AUTH_URL;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_LOCATION_AWARE;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_PASSWORD;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_TENANT;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.DOT_USERNAME;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_BLOCKSIZE;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_CONNECTION_TIMEOUT;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_PARTITION_SIZE;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_PROXY_HOST_PROPERTY;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_PROXY_PORT_PROPERTY;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_RETRY_COUNT;
import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.SWIFT_SERVICE_PREFIX;
/**
* Test the swift service-specific configuration binding features
*/
public class TestSwiftConfig extends Assert {
public static final String SERVICE = "openstack";
@Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class)
public void testEmptyUrl() throws Exception {
final Configuration configuration = new Configuration();
set(configuration, DOT_TENANT, "tenant");
set(configuration, DOT_USERNAME, "username");
set(configuration, DOT_PASSWORD, "password");
mkInstance(configuration);
}
@Test
public void testEmptyTenant() throws Exception {
final Configuration configuration = new Configuration();
set(configuration, DOT_AUTH_URL, "http://localhost:8080");
set(configuration, DOT_USERNAME, "username");
set(configuration, DOT_PASSWORD, "password");
mkInstance(configuration);
}
@Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class)
public void testEmptyUsername() throws Exception {
final Configuration configuration = new Configuration();
set(configuration, DOT_AUTH_URL, "http://localhost:8080");
set(configuration, DOT_TENANT, "tenant");
set(configuration, DOT_PASSWORD, "password");
mkInstance(configuration);
}
@Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class)
public void testEmptyPassword() throws Exception {
final Configuration configuration = new Configuration();
set(configuration, DOT_AUTH_URL, "http://localhost:8080");
set(configuration, DOT_TENANT, "tenant");
set(configuration, DOT_USERNAME, "username");
mkInstance(configuration);
}
@Test
public void testGoodRetryCount() throws Exception {
final Configuration configuration = createCoreConfig();
configuration.set(SWIFT_RETRY_COUNT, "3");
mkInstance(configuration);
}
@Test
public void testBadRetryCount() throws Exception {
final Configuration configuration = createCoreConfig();
configuration.set(SWIFT_RETRY_COUNT, "three");
try {
mkInstance(configuration);
} catch (SwiftConfigurationException e) {
if (TestUtils.isHadoop1())
Assert.fail();
return;
}
if (!TestUtils.isHadoop1())
Assert.fail();
}
@Test
public void testBadConnectTimeout() throws Exception {
final Configuration configuration = createCoreConfig();
configuration.set(SWIFT_CONNECTION_TIMEOUT, "three");
try {
mkInstance(configuration);
} catch (SwiftConfigurationException e) {
if (TestUtils.isHadoop1())
Assert.fail();
return;
}
if (!TestUtils.isHadoop1())
Assert.fail();
}
@Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class)
public void testZeroBlocksize() throws Exception {
final Configuration configuration = createCoreConfig();
configuration.set(SWIFT_BLOCKSIZE, "0");
mkInstance(configuration);
}
@Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class)
public void testNegativeBlocksize() throws Exception {
final Configuration configuration = createCoreConfig();
configuration.set(SWIFT_BLOCKSIZE, "-1");
mkInstance(configuration);
}
@Test
public void testPositiveBlocksize() throws Exception {
final Configuration configuration = createCoreConfig();
int size = 127;
configuration.set(SWIFT_BLOCKSIZE, Integer.toString(size));
SwiftRestClient restClient = mkInstance(configuration);
assertEquals(size, restClient.getBlocksizeKB());
}
@Test
public void testLocationAwareTruePropagates() throws Exception {
final Configuration configuration = createCoreConfig();
set(configuration, DOT_LOCATION_AWARE, "true");
SwiftRestClient restClient = mkInstance(configuration);
assertTrue(restClient.isLocationAware());
}
@Test
public void testLocationAwareFalsePropagates() throws Exception {
final Configuration configuration = createCoreConfig();
set(configuration, DOT_LOCATION_AWARE, "false");
SwiftRestClient restClient = mkInstance(configuration);
assertFalse(restClient.isLocationAware());
}
@Test(expected = org.apache.hadoop.fs.swift.exceptions.SwiftConfigurationException.class)
public void testNegativePartsize() throws Exception {
final Configuration configuration = createCoreConfig();
configuration.set(SWIFT_PARTITION_SIZE, "-1");
SwiftRestClient restClient = mkInstance(configuration);
}
@Test
public void testPositivePartsize() throws Exception {
final Configuration configuration = createCoreConfig();
int size = 127;
configuration.set(SWIFT_PARTITION_SIZE, Integer.toString(size));
SwiftRestClient restClient = mkInstance(configuration);
assertEquals(size, restClient.getPartSizeKB());
}
@Test
public void testProxyData() throws Exception {
final Configuration configuration = createCoreConfig();
String proxy="web-proxy";
int port = 8088;
configuration.set(SWIFT_PROXY_HOST_PROPERTY, proxy);
configuration.set(SWIFT_PROXY_PORT_PROPERTY, Integer.toString(port));
SwiftRestClient restClient = mkInstance(configuration);
assertEquals(proxy, restClient.getProxyHost());
assertEquals(port, restClient.getProxyPort());
}
private Configuration createCoreConfig() {
final Configuration configuration = new Configuration();
set(configuration, DOT_AUTH_URL, "http://localhost:8080");
set(configuration, DOT_TENANT, "tenant");
set(configuration, DOT_USERNAME, "username");
set(configuration, DOT_PASSWORD, "password");
return configuration;
}
private void set(Configuration configuration, String field, String value) {
configuration.set(SWIFT_SERVICE_PREFIX + SERVICE + field, value);
}
private SwiftRestClient mkInstance(Configuration configuration) throws
IOException,
URISyntaxException {
URI uri = new URI("swift://container.openstack/");
return SwiftRestClient.getInstance(uri, configuration);
}
}
View File
@ -1,295 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.hadoop.fs.swift;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.swift.exceptions.SwiftBadRequestException;
import org.apache.hadoop.fs.swift.exceptions.SwiftNotDirectoryException;
import org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem;
import org.apache.hadoop.fs.swift.util.SwiftTestUtils;
import org.junit.Test;
import java.io.FileNotFoundException;
import java.io.IOException;
import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.assertFileHasLength;
import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.assertIsDirectory;
import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.readBytesToString;
import static org.apache.hadoop.fs.swift.util.SwiftTestUtils.writeTextFile;
/**
* Test basic filesystem operations.
* Many of these are similar to those in {@link TestSwiftFileSystemContract}
* -this is a JUnit4 test suite used to initially test the Swift
* component. Once written, there's no reason not to retain these tests.
*/
public class TestSwiftFileSystemBasicOps extends SwiftFileSystemBaseTest {
private static final Log LOG =
LogFactory.getLog(TestSwiftFileSystemBasicOps.class);
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testLsRoot() throws Throwable {
Path path = new Path("/");
FileStatus[] statuses = fs.listStatus(path);
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testMkDir() throws Throwable {
Path path = new Path("/test/MkDir");
fs.mkdirs(path);
//success then -so try a recursive operation
fs.delete(path, true);
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testDeleteNonexistentFile() throws Throwable {
Path path = new Path("/test/DeleteNonexistentFile");
assertFalse("delete returned true", fs.delete(path, false));
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testPutFile() throws Throwable {
Path path = new Path("/test/PutFile");
writeTextFile(fs, path, "Testing a put to a file", false);
assertDeleted(path, false);
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testPutGetFile() throws Throwable {
Path path = new Path("/test/PutGetFile");
try {
String text = "Testing a put and get to a file "
+ System.currentTimeMillis();
writeTextFile(fs, path, text, false);
String result = readBytesToString(fs, path, text.length());
assertEquals(text, result);
} finally {
delete(fs, path);
}
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testPutDeleteFileInSubdir() throws Throwable {
Path path =
new Path("/test/PutDeleteFileInSubdir/testPutDeleteFileInSubdir");
String text = "Testing a put and get to a file in a subdir "
+ System.currentTimeMillis();
writeTextFile(fs, path, text, false);
assertDeleted(path, false);
//now delete the parent directory, which should be empty
assertDeleted(new Path("/test/PutDeleteFileInSubdir"), false);
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testRecursiveDelete() throws Throwable {
Path childpath =
new Path("/test/testRecursiveDelete");
String text = "Testing a put and get to a file in a subdir "
+ System.currentTimeMillis();
writeTextFile(fs, childpath, text, false);
//recursively delete the parent while the child still exists
assertDeleted(new Path("/test"), true);
assertFalse("child entry still present " + childpath, fs.exists(childpath));
}
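/**
* Best-effort, non-recursive delete of a single path: failures are logged
* rather than rethrown, so cleanup never masks the original test outcome.
*/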
private void delete(SwiftNativeFileSystem fs, Path path) {
try {
if (!fs.delete(path, false)) {
LOG.warn("Failed to delete " + path);
}
} catch (IOException e) {
LOG.warn("deleting " + path, e);
}
}
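/**
* Recursive variant of {@link #delete(SwiftNativeFileSystem, Path)},
* used to clean up whole directory trees.
*/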
private void deleteR(SwiftNativeFileSystem fs, Path path) {
try {
if (!fs.delete(path, true)) {
LOG.warn("Failed to delete " + path);
}
} catch (IOException e) {
LOG.warn("deleting " + path, e);
}
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testOverwrite() throws Throwable {
Path path = new Path("/test/Overwrite");
try {
String text = "Testing a put to a file "
+ System.currentTimeMillis();
writeTextFile(fs, path, text, false);
assertFileHasLength(fs, path, text.length());
String text2 = "Overwriting a file "
+ System.currentTimeMillis();
writeTextFile(fs, path, text2, true);
assertFileHasLength(fs, path, text2.length());
String result = readBytesToString(fs, path, text2.length());
assertEquals(text2, result);
} finally {
delete(fs, path);
}
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testOverwriteDirectory() throws Throwable {
Path path = new Path("/test/testOverwriteDirectory");
try {
fs.mkdirs(path.getParent());
String text = "Testing a put to a file "
+ System.currentTimeMillis();
writeTextFile(fs, path, text, false);
assertFileHasLength(fs, path, text.length());
} finally {
delete(fs, path);
}
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testFileStatus() throws Throwable {
Path path = new Path("/test/FileStatus");
try {
String text = "Testing File Status "
+ System.currentTimeMillis();
writeTextFile(fs, path, text, false);
SwiftTestUtils.assertIsFile(fs, path);
} finally {
delete(fs, path);
}
}
/**
* Assert that a newly created directory is reported as a directory.
*
* @throws Throwable if not, or if something else failed
*/
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testDirStatus() throws Throwable {
Path path = new Path("/test/DirStatus");
try {
fs.mkdirs(path);
assertIsDirectory(fs, path);
} finally {
delete(fs, path);
}
}
/**
* Assert that a directory is still reported as a directory after a child
* entry is created inside it and then deleted.
*
* @throws Throwable if not, or if something else failed
*/
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testDirStaysADir() throws Throwable {
Path path = new Path("/test/dirStaysADir");
Path child = new Path(path, "child");
try {
//create the dir
fs.mkdirs(path);
//assert the parent has the directory nature
assertIsDirectory(fs, path);
//create a child file under the directory
writeTextFile(fs, child, "child file", true);
//assert the parent has the directory nature
assertIsDirectory(fs, path);
//now rm the child
delete(fs, child);
} finally {
deleteR(fs, path);
}
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testCreateMultilevelDir() throws Throwable {
Path base = new Path("/test/CreateMultilevelDir");
Path path = new Path(base, "1/2/3");
fs.mkdirs(path);
assertExists("deep multilevel dir not created", path);
fs.delete(base, true);
assertPathDoesNotExist("Multilevel delete failed", path);
assertPathDoesNotExist("Multilevel delete failed", base);
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testCreateDirWithFileParent() throws Throwable {
Path path = new Path("/test/CreateDirWithFileParent");
Path child = new Path(path, "subdir/child");
fs.mkdirs(path.getParent());
try {
//create the parent path as a file
writeTextFile(fs, path, "parent", true);
try {
fs.mkdirs(child);
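//no fail() after mkdirs: the test only verifies that *if* the call
//fails, it fails with SwiftNotDirectoryException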
} catch (SwiftNotDirectoryException expected) {
LOG.debug("Expected Exception", expected);
}
} finally {
fs.delete(path, true);
}
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testLongObjectNamesForbidden() throws Throwable {
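//build a name of "/" plus 300 four-hex-digit blocks: 1201 characters,
//beyond Swift's typical 1024-byte object-name limit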
StringBuilder buffer = new StringBuilder(1200);
buffer.append("/");
for (int i = 0; i < (1200 / 4); i++) {
buffer.append(String.format("%04x", i));
}
String pathString = buffer.toString();
Path path = new Path(pathString);
try {
writeTextFile(fs, path, pathString, true);
//reaching here means the over-long name was accepted: clean up, then fail
fs.delete(path, false);
fail("Managed to create an object with a name of length "
+ pathString.length());
} catch (SwiftBadRequestException e) {
//expected: the object store rejects over-long names
LOG.debug("Caught expected exception", e);
}
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testLsNonExistentFile() throws Exception {
try {
Path path = new Path("/test/hadoop/file");
FileStatus[] statuses = fs.listStatus(path);
fail("Should throw FileNotFoundException on " + path
+ " but got list of length " + statuses.length);
} catch (FileNotFoundException fnfe) {
// expected
}
}
@Test(timeout = SWIFT_TEST_TIMEOUT)
public void testExistsRoot() throws Exception {
Path path = new Path("/");
assertTrue("exists('/') returned false", fs.exists(path));
}
}

Some files were not shown because too many files have changed in this diff