
YARN-10494 CLI tool for docker-to-squashfs conversion (pure Java). #2513

Open
wants to merge 5 commits into base: trunk

Conversation

craigcondit

Initial WIP PR for YARN-10494.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 5s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 2s No case conflicting files found.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 68 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 34m 58s trunk passed
+1 💚 compile 2m 58s trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 compile 2m 13s trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 checkstyle 0m 41s trunk passed
+1 💚 mvnsite 2m 17s trunk passed
+1 💚 shadedclient 20m 0s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 1m 41s trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javadoc 1m 23s trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+0 🆗 spotbugs 5m 40s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 5m 36s trunk passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 41s Maven dependency ordering for patch
+1 💚 mvninstall 2m 47s the patch passed
+1 💚 compile 3m 5s the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javac 3m 5s the patch passed
+1 💚 compile 2m 12s the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 javac 2m 12s the patch passed
-0 ⚠️ checkstyle 0m 44s /diff-checkstyle-hadoop-tools.txt hadoop-tools: The patch generated 399 new + 0 unchanged - 0 fixed = 399 total (was 0)
+1 💚 mvnsite 2m 41s the patch passed
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 xml 0m 2s The patch has no ill-formed XML file.
-1 ❌ shadedclient 2m 22s patch has errors when building and testing our client artifacts.
+1 💚 javadoc 2m 4s the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javadoc 1m 40s the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
-1 ❌ findbugs 5m 51s /new-findbugs-hadoop-tools.html hadoop-tools generated 25 new + 0 unchanged - 0 fixed = 25 total (was 0)
_ Other Tests _
+1 💚 unit 0m 27s hadoop-runc in the patch passed.
-1 ❌ unit 54m 23s /patch-unit-hadoop-tools.txt hadoop-tools in the patch failed.
-1 ❌ asflicense 0m 29s /patch-asflicense-problems.txt The patch generated 6 ASF License warnings.
153m 38s
Reason Tests
FindBugs module:hadoop-tools
Self assignment of field DockerClient.manifestChooser in org.apache.hadoop.runc.docker.DockerClient.setManifestChooser(ManifestChooser) At DockerClient.java:[line 94]
Inconsistent synchronization of org.apache.hadoop.runc.docker.auth.BearerCredentials.token; locked 66% of time. Unsynchronized access at BearerCredentials.java:[line 65]
Class org.apache.hadoop.runc.docker.auth.BearerScheme defines non-transient non-serializable instance field client In BearerScheme.java
org.apache.hadoop.runc.docker.model.ManifestListV2.CONTENT_TYPE isn't final but should be At ManifestListV2.java:[line 14]
org.apache.hadoop.runc.docker.model.ManifestV2.CONTENT_TYPE isn't final but should be At ManifestV2.java:[line 14]
Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.runc.squashfs.SquashFsTree.build() At SquashFsTree.java:[line 140]
org.apache.hadoop.runc.squashfs.data.DataBlock.getData() may expose internal representation by returning DataBlock.data At DataBlock.java:[line 28]
new org.apache.hadoop.runc.squashfs.data.DataBlock(byte[], int, int) may expose internal representation by storing an externally mutable object into DataBlock.data At DataBlock.java:[line 44]
Increment of volatile field org.apache.hadoop.runc.squashfs.data.DataBlockCache.cacheHits in org.apache.hadoop.runc.squashfs.data.DataBlockCache.get(DataBlockCache$Key) At DataBlockCache.java:[line 53]
Increment of volatile field org.apache.hadoop.runc.squashfs.data.DataBlockCache.cacheMisses in org.apache.hadoop.runc.squashfs.data.DataBlockCache.get(DataBlockCache$Key) At DataBlockCache.java:[line 47]
org.apache.hadoop.runc.squashfs.directory.DirectoryEntry.getName() may expose internal representation by returning DirectoryEntry.name At DirectoryEntry.java:[line 67]
org.apache.hadoop.runc.squashfs.inode.BasicFileINode.getBlockSizes() may expose internal representation by returning BasicFileINode.blockSizes At BasicFileINode.java:[line 190]
org.apache.hadoop.runc.squashfs.inode.BasicFileINode.setBlockSizes(int[]) may expose internal representation by storing an externally mutable object into BasicFileINode.blockSizes At BasicFileINode.java:[line 195]
org.apache.hadoop.runc.squashfs.inode.BasicSymlinkINode.getTargetPath() may expose internal representation by returning BasicSymlinkINode.targetPath At BasicSymlinkINode.java:[line 68]
org.apache.hadoop.runc.squashfs.inode.ExtendedFileINode.getBlockSizes() may expose internal representation by returning ExtendedFileINode.blockSizes At ExtendedFileINode.java:[line 85]
org.apache.hadoop.runc.squashfs.inode.ExtendedFileINode.setBlockSizes(int[]) may expose internal representation by storing an externally mutable object into ExtendedFileINode.blockSizes At ExtendedFileINode.java:[line 90]
org.apache.hadoop.runc.squashfs.inode.ExtendedSymlinkINode.getTargetPath() may expose internal representation by returning ExtendedSymlinkINode.targetPath At ExtendedSymlinkINode.java:[line 53]
new org.apache.hadoop.runc.squashfs.metadata.MemoryMetadataBlockReader(int, SuperBlock, byte[], int, int) may expose internal representation by storing an externally mutable object into MemoryMetadataBlockReader.data At MemoryMetadataBlockReader.java:[line 45]
org.apache.hadoop.runc.squashfs.metadata.MetadataBlock.getData() may expose internal representation by returning MetadataBlock.data At MetadataBlock.java:[line 63]
new org.apache.hadoop.runc.squashfs.table.MemoryTableReader(SuperBlock, byte[], int, int) may expose internal representation by storing an externally mutable object into MemoryTableReader.data At MemoryTableReader.java:[line 41]
Boxing/unboxing to parse a primitive in org.apache.hadoop.runc.squashfs.util.SquashDebug.run(String[]) At SquashDebug.java:[line 195]
Boxing/unboxing to parse a primitive in org.apache.hadoop.runc.squashfs.util.SquashDebug.run(String[]) At SquashDebug.java:[line 194]
Possible null pointer dereference in org.apache.hadoop.runc.tools.ImportDockerImage.deleteRecursive(File) due to return value of called method Dereferenced at ImportDockerImage.java:[line 472]
Exceptional return value of java.io.File.delete() ignored in org.apache.hadoop.runc.tools.ImportDockerImage.deleteRecursive(File) At ImportDockerImage.java:[line 476]
Exceptional return value of java.io.File.mkdirs() ignored in org.apache.hadoop.runc.tools.ImportDockerImage.importDockerImage(String, String) At ImportDockerImage.java:[line 201]
Failed junit tests hadoop.tools.dynamometer.TestDynamometerInfra
Subsystem Report/Notes
Docker ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/1/artifact/out/Dockerfile
GITHUB PR #2513
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle
uname Linux 4c6b91f6eada 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 2b5b556
Default Java Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/1/testReport/
Max. process+thread count 999 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-runc hadoop-tools U: hadoop-tools
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/1/console
versions git=2.17.1 maven=3.6.0 findbugs=4.0.6
Powered by Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 9s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 2s No case conflicting files found.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 69 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 39m 30s trunk passed
+1 💚 compile 4m 6s trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 compile 2m 52s trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 checkstyle 0m 52s trunk passed
+1 💚 mvnsite 3m 8s trunk passed
+1 💚 shadedclient 24m 6s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 2m 14s trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javadoc 1m 49s trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+0 🆗 spotbugs 7m 29s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 7m 22s trunk passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 27s Maven dependency ordering for patch
+1 💚 mvninstall 3m 45s the patch passed
+1 💚 compile 4m 10s the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javac 4m 10s the patch passed
+1 💚 compile 3m 2s the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 javac 3m 2s the patch passed
+1 💚 checkstyle 0m 42s the patch passed
+1 💚 mvnsite 3m 23s the patch passed
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 xml 0m 6s The patch has no ill-formed XML file.
+1 💚 shadedclient 20m 13s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 2m 50s the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javadoc 2m 22s the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 findbugs 8m 43s the patch passed
_ Other Tests _
+1 💚 unit 0m 35s hadoop-runc in the patch passed.
-1 ❌ unit 0m 35s /patch-unit-hadoop-tools.txt hadoop-tools in the patch failed.
+0 🆗 asflicense 0m 16s ASF License check generated no output?
138m 34s
Subsystem Report/Notes
Docker ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/3/artifact/out/Dockerfile
GITHUB PR #2513
Optional Tests dupname asflicense xml compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
uname Linux c2e5a5f03cb5 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / db73e99
Default Java Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/3/testReport/
Max. process+thread count 510 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-runc hadoop-tools U: hadoop-tools
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/3/console
versions git=2.17.1 maven=3.6.0 findbugs=4.0.6
Powered by Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 31s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 4s No case conflicting files found.
+1 💚 @author 0m 1s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 69 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 32m 16s trunk passed
+1 💚 compile 3m 3s trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 compile 2m 18s trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 checkstyle 0m 44s trunk passed
+1 💚 mvnsite 2m 23s trunk passed
+1 💚 shadedclient 18m 30s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 1m 48s trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javadoc 1m 29s trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+0 🆗 spotbugs 5m 35s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 5m 32s trunk passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 47s Maven dependency ordering for patch
+1 💚 mvninstall 2m 53s the patch passed
+1 💚 compile 3m 2s the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javac 3m 2s the patch passed
+1 💚 compile 2m 18s the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 javac 2m 18s the patch passed
+1 💚 checkstyle 0m 38s the patch passed
+1 💚 mvnsite 2m 48s the patch passed
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 xml 0m 6s The patch has no ill-formed XML file.
+1 💚 shadedclient 14m 52s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 2m 13s the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javadoc 1m 53s the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 findbugs 6m 28s the patch passed
_ Other Tests _
+1 💚 unit 0m 34s hadoop-runc in the patch passed.
-1 ❌ unit 55m 42s /patch-unit-hadoop-tools.txt hadoop-tools in the patch failed.
-1 ❌ asflicense 0m 39s /patch-asflicense-problems.txt The patch generated 1 ASF License warnings.
164m 34s
Reason Tests
Failed junit tests hadoop.tools.dynamometer.TestDynamometerInfra
Subsystem Report/Notes
Docker ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/2/artifact/out/Dockerfile
GITHUB PR #2513
Optional Tests dupname asflicense xml compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
uname Linux d275b43562c5 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / db73e99
Default Java Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/2/testReport/
Max. process+thread count 1072 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-runc hadoop-tools U: hadoop-tools
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/2/console
versions git=2.17.1 maven=3.6.0 findbugs=4.0.6
Powered by Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 10s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 3s No case conflicting files found.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 69 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 36m 18s trunk passed
+1 💚 compile 3m 4s trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 compile 2m 12s trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 checkstyle 0m 40s trunk passed
+1 💚 mvnsite 2m 17s trunk passed
+1 💚 shadedclient 20m 26s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 1m 41s trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javadoc 1m 23s trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+0 🆗 spotbugs 5m 44s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 5m 40s trunk passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 25s Maven dependency ordering for patch
+1 💚 mvninstall 2m 49s the patch passed
+1 💚 compile 3m 3s the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javac 3m 3s the patch passed
+1 💚 compile 2m 13s the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 javac 2m 13s the patch passed
+1 💚 checkstyle 0m 35s the patch passed
+1 💚 mvnsite 2m 37s the patch passed
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 xml 0m 5s The patch has no ill-formed XML file.
+1 💚 shadedclient 16m 56s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 2m 4s the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javadoc 1m 45s the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 findbugs 6m 44s the patch passed
_ Other Tests _
-1 ❌ unit 0m 30s /patch-unit-hadoop-tools_hadoop-runc.txt hadoop-runc in the patch failed.
-1 ❌ unit 54m 23s /patch-unit-hadoop-tools.txt hadoop-tools in the patch failed.
+1 💚 asflicense 0m 33s The patch does not generate ASF License warnings.
169m 59s
Reason Tests
Failed junit tests hadoop.runc.squashfs.TestSquashFsInterop
hadoop.runc.squashfs.TestSquashFsInterop
hadoop.tools.dynamometer.TestDynamometerInfra
Subsystem Report/Notes
Docker ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/4/artifact/out/Dockerfile
GITHUB PR #2513
Optional Tests dupname asflicense xml compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
uname Linux 32dcfb00bb3d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / e2c1268
Default Java Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/4/testReport/
Max. process+thread count 976 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-runc hadoop-tools U: hadoop-tools
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/4/console
versions git=2.17.1 maven=3.6.0 findbugs=4.0.6
Powered by Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 7s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 3s No case conflicting files found.
+1 💚 @author 0m 1s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 69 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 34m 50s trunk passed
+1 💚 compile 3m 2s trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 compile 2m 12s trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 checkstyle 0m 39s trunk passed
+1 💚 mvnsite 2m 16s trunk passed
+1 💚 shadedclient 20m 9s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 1m 41s trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javadoc 1m 23s trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+0 🆗 spotbugs 5m 44s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 5m 40s trunk passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 23s Maven dependency ordering for patch
+1 💚 mvninstall 2m 49s the patch passed
+1 💚 compile 3m 5s the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javac 3m 5s the patch passed
+1 💚 compile 2m 15s the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 javac 2m 15s the patch passed
+1 💚 checkstyle 0m 35s the patch passed
+1 💚 mvnsite 2m 39s the patch passed
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 xml 0m 5s The patch has no ill-formed XML file.
+1 💚 shadedclient 16m 58s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 2m 4s the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
+1 💚 javadoc 1m 46s the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
+1 💚 findbugs 6m 46s the patch passed
_ Other Tests _
+1 💚 unit 0m 29s hadoop-runc in the patch passed.
-1 ❌ unit 54m 24s /patch-unit-hadoop-tools.txt hadoop-tools in the patch failed.
+1 💚 asflicense 0m 34s The patch does not generate ASF License warnings.
168m 37s
Reason Tests
Failed junit tests hadoop.tools.dynamometer.TestDynamometerInfra
Subsystem Report/Notes
Docker ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/5/artifact/out/Dockerfile
GITHUB PR #2513
Optional Tests dupname asflicense xml compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle
uname Linux 71ca4a00ebfb 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 7dda804
Default Java Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/5/testReport/
Max. process+thread count 958 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-runc hadoop-tools U: hadoop-tools
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/5/console
versions git=2.17.1 maven=3.6.0 findbugs=4.0.6
Powered by Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org

This message was automatically generated.

@craigcondit
Author

Test failure appears to be unrelated. @ericbadger, would you be willing to do a review? I know it's a lot of code.

@ericbadger
Contributor

I'd be happy to do a review. It will likely take a while, though. As you said, it's quite a bit of code.

@ericbadger
Contributor

I've been able to run the tool locally and it seems to work as designed. At least from my initial testing. However, I found that the tool runs quite a bit slower than the docker-to-squash python script. Notably, I ran both this tool and the docker-to-squash tool on a fairly large image (14.8 GB, 32 layers) and it took 38:13 to run on this tool while taking 21:50 to run on docker-to-squash. I'm currently trying to figure out where the differences are that make this tool take so much longer.

My first thought is that this tool appears to download layers sequentially, while the docker-to-squash tool does them in parallel (since it uses docker pull). The next step is converting the layers, which pits the internal implementation against mksquashfs. It's possible that mksquashfs is just faster there; I'll need to do more analysis. And then the last step is the layer upload. I know this tool uploads both the sqsh image and the tgz file, so that's about double the work, which explains why it would take longer.

Anyway, I'm still looking into the performance, but @insideo , feel free to post your insights.

@ericbadger
Contributor

ericbadger commented Dec 12, 2020

Here's some additional info. The times for docker-to-squash aren't exact, though. They would actually be smaller than the values listed, because those times include both the time to convert each layer to squashfs and the time to upload it to HDFS. I performed this test on an internal image, so I can't show it to you or say what's in it other than that it's some ML stuff and it's huge. I'll try to find a similar image on Docker Hub that I can use.

Layer Size Java CLI docker-to-squash
Download Time — 00:04:39 00:02:50
Layer 1 Conversion 89646470 00:01:49 00:00:35
Layer 2 Conversion 67519574 00:00:34 00:00:23
Layer 3 Conversion 154819384 00:00:39 00:00:24
Layer 4 Conversion 782 00:00:00 00:00:15
Layer 5 Conversion 183 00:00:00 00:00:15
Layer 6 Conversion 240 00:00:00 00:00:14
Layer 7 Conversion 157 00:00:00 00:00:15
Layer 8 Conversion 2277074491 00:09:10 00:02:35
Layer 9 Conversion 80814130 00:00:37 00:00:26
Layer 10 Conversion 673 00:00:00 00:00:15
Layer 11 Conversion 376 00:00:00 00:00:15
Layer 12 Conversion 1091 00:00:00 00:00:15
Layer 13 Conversion 522 00:00:00 00:00:15
Layer 14 Conversion 421395753 00:00:23 00:00:24
Layer 15 Conversion 205884 00:00:00 00:00:15
Layer 16 Conversion 428290011 00:00:21 00:00:24
Layer 17 Conversion 34387 00:00:00 00:00:15
Layer 18 Conversion 9419 00:00:00 00:00:15
Layer 19 Conversion 1628341 00:00:01 00:00:16
Layer 20 Conversion 4098138 00:00:00 00:00:15
Layer 21 Conversion 1214299550 00:09:20 00:02:41
Layer 22 Conversion 3222 00:00:00 00:00:16
Layer 23 Conversion 57882209 00:00:07 00:00:17
Layer 24 Conversion 2247852313 00:07:50 00:02:24
Layer 25 Conversion 20900285 00:00:12 00:00:18
Layer 26 Conversion 168992501 00:00:59 00:00:30
Layer 27 Conversion 3780 00:00:00 00:00:15
Layer 28 Conversion 4885 00:00:00 00:00:16
Layer 29 Conversion 1345108 00:00:00 00:00:15
Layer 30 Conversion 31320991 00:00:02 00:00:16
Layer 31 Conversion 31697060 00:00:02 00:00:16
Layer 32 Conversion 527452 00:00:00 00:00:15

@craigcondit
Author

@ericbadger I suspect the performance delta is due to the latest mksquashfs code being multi-threaded during encodes -- the larger images seem to have a higher delta in your example. We could implement multi-threaded conversion in the Java code as well, but since the engine is designed to be mostly streaming, it would be a pretty big code change. It would also make reproducible builds considerably more difficult to ensure.

What we could do is process multiple layers in parallel - this would likely close the gap since most real-world images have several layers which would need conversion, and since each individual layer would still be processed serially, reproducibility would be maintained.
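The parallel-layers idea could be sketched roughly as below. This is a minimal illustration, not the PR's actual code: `convertLayer` is a hypothetical stand-in for the real tar.gz-to-squashfs conversion, and the thread count is arbitrary. The key property is that `invokeAll` returns futures in input order, and each layer is still converted serially on its own thread, so per-layer output stays deterministic.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelLayerConvert {

  // Convert each layer on its own thread; each layer is still processed
  // serially inside convertLayer(), preserving reproducible per-layer output.
  static List<String> convertAllLayers(List<String> layerDigests, int threads)
      throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Callable<String>> tasks = new ArrayList<>();
      for (String digest : layerDigests) {
        tasks.add(() -> convertLayer(digest));
      }
      // invokeAll preserves input order, so results line up with layers.
      List<String> results = new ArrayList<>();
      for (Future<String> f : pool.invokeAll(tasks)) {
        results.add(f.get());
      }
      return results;
    } finally {
      pool.shutdown();
    }
  }

  // Hypothetical stand-in for the real tar.gz -> squashfs conversion.
  static String convertLayer(String digest) {
    return digest + ".sqsh";
  }

  public static void main(String[] args) throws Exception {
    // prints: [layer1.sqsh, layer2.sqsh, layer3.sqsh]
    System.out.println(convertAllLayers(List.of("layer1", "layer2", "layer3"), 2));
  }
}
```

Since results come back in input order, downstream steps (manifest writing, upload) would see the same sequence regardless of which thread finishes first.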

Thoughts?

@ericbadger
Contributor

@insideo parallel layer conversion would certainly be helpful. I am sort of worried about some images, though. Generally, docker images are built with fewer layers rather than many. And in the runc implementation, there's actually a limit of 37 layers because of how we name the mounts, as well as the 4 KB limit on the arguments to the mount command. So that gives opposing incentives: on one hand, you want more, smaller layers to decrease image conversion time; on the other, you want fewer, larger layers to stay under the layer limit and follow general convention for docker images (e.g. chaining RUN yum install && yum install && yum install && etc.).

This is especially true for anybody who starts building their images with Buildah or takes advantage of the new Docker image feature for defining their own layer points. They would likely be inclined to make fewer layers, not more.

When you say the tool is streaming, what exactly do you mean? I asked you this before and I thought you said that it would start converting the layers as they came in instead of waiting for them to be fully downloaded. But looking at the log it seems like there is a download stage, a conversion stage, and then an upload stage and those stages are sequential

Also, I just realized that I am using squashfs-tools 4.3, which doesn't have reproducible builds turned on. So it's a slightly unfair comparison, since 4.4 slows things down by removing some (all?) of the multi-threading in mksquashfs. I will retest with squashfs-tools 4.4 with reproducible builds enabled.

@craigcondit
Author

When you say the tool is streaming, what exactly do you mean? I asked you this before and I thought you said that it would start converting the layers as they came in instead of waiting for them to be fully downloaded. But looking at the log it seems like there is a download stage, a conversion stage, and then an upload stage and those stages are sequential

The current implementation of the CLI tool is not stream-oriented, but the underlying squashfs code definitely is. The filesystem tree and content are built up dynamically as the tar.gz file is read. To do otherwise would require unpacking the tar.gz file into a temporary location, which was explicitly avoided in the design to minimize unnecessary I/O and avoid issues of UID/GID/timestamp changes in the process.
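The streaming design described above can be illustrated with a small stdlib-only sketch. This is not the PR's squashfs writer: the real code parses tar entries inside the gzip stream (e.g. via a tar-aware reader), while here a hypothetical `ChunkConsumer` just stands in for "hand bytes to the filesystem builder as they are decoded", with no temporary unpacked copy on disk.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class StreamingRead {

  // Callback standing in for the squashfs tree/content builder.
  interface ChunkConsumer {
    void accept(byte[] buf, int len) throws IOException;
  }

  // Consume a .gz stream chunk by chunk, feeding each decoded chunk to the
  // consumer immediately -- nothing is unpacked to a temporary directory.
  static long streamDecompress(InputStream gz, ChunkConsumer consumer)
      throws IOException {
    long total = 0;
    try (GZIPInputStream in = new GZIPInputStream(gz)) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) != -1) {
        consumer.accept(buf, n);
        total += n;
      }
    }
    return total;
  }

  // Helper to build an in-memory .gz payload for the demo.
  static byte[] gzip(byte[] data) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream out = new GZIPOutputStream(bos)) {
      out.write(data);
    }
    return bos.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    byte[] payload = "layer contents".getBytes(StandardCharsets.UTF_8);
    ByteArrayOutputStream rebuilt = new ByteArrayOutputStream();
    long n = streamDecompress(new ByteArrayInputStream(gzip(payload)),
        (buf, len) -> rebuilt.write(buf, 0, len));
    // prints: true
    System.out.println(n == payload.length
        && new String(rebuilt.toByteArray(), StandardCharsets.UTF_8)
            .equals("layer contents"));
  }
}
```

Because the consumer sees bytes in stream order as they arrive, UID/GID/timestamp metadata never touches the local filesystem, which is the property the design is protecting.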

Also, I just realized that I am using squashfs-tools 4.3, which doesn't have reproducible builds turned on. So it's a slightly unfair comparison, since 4.4 slows things down by removing some (all?) of the multi-threading in mksquashfs. I will retest with squashfs-tools 4.4 with reproducible builds enabled.

This would be an interesting comparison for sure.

@ericbadger
Contributor

Ahh makes sense on the streaming. Thanks for the explanation

I cloned https://github.com/plougher/squashfs-tools and ran make to compile mksquashfs. Then I ran my docker-to-squash script with the new mksquashfs. These are the results. The newest run is under docker-to-squash 4.4. It doesn't show all that much difference, except for the first layer for some reason

Size Java CLI docker-to-squash 4.3 docker-to-squash 4.4
(download) 00:04:39 00:02:50 00:03:53
89646470 00:01:49 00:00:35 00:00:36
67519574 00:00:34 00:00:23 00:00:25
154819384 00:00:39 00:00:24 00:00:26
782 00:00:00 00:00:15 00:00:15
183 00:00:00 00:00:15 00:00:16
240 00:00:00 00:00:14 00:00:15
157 00:00:00 00:00:15 00:00:15
2277074491 00:09:10 00:02:35 00:02:36
80814130 00:00:37 00:00:26 00:00:27
673 00:00:00 00:00:15 00:00:15
376 00:00:00 00:00:15 00:00:15
1091 00:00:00 00:00:15 00:00:16
522 00:00:00 00:00:15 00:00:15
421395753 00:00:23 00:00:24 00:00:25
205884 00:00:00 00:00:15 00:00:16
428290011 00:00:21 00:00:24 00:00:25
34387 00:00:00 00:00:15 00:00:15
9419 00:00:00 00:00:15 00:00:16
1628341 00:00:01 00:00:16 00:00:15
4098138 00:00:00 00:00:15 00:00:16
1214299550 00:09:20 00:02:41 00:02:48
3222 00:00:00 00:00:16 00:00:15
57882209 00:00:07 00:00:17 00:00:19
2247852313 00:07:50 00:02:24 00:02:37
20900285 00:00:12 00:00:18 00:00:18
168992501 00:00:59 00:00:30 00:00:30
3780 00:00:00 00:00:15 00:00:15
4885 00:00:00 00:00:16 00:00:15
1345108 00:00:00 00:00:15 00:00:16
31320991 00:00:02 00:00:16 00:00:16
31697060 00:00:02 00:00:16 00:00:16
527452 00:00:00 00:00:15 00:00:15

@eric-badger

Hey @insideo , so I'm actively reviewing this, but obviously it will be a while before I get through the whole thing. I do have an initial ask, though. When I enable debug logging, my terminal gets bombarded with thousands (more?) of logs that look like the text below. It's probably just a single log call that is going berserk because of a big image over HTTP. Could you look into that?

2021-01-12 23:31:25,717 DEBUG [main] http.wire (Wire.java:wire(73)) - http-outgoing-6 << "[0xe8][0xe4][0xb]... [SNIP]"
2021-01-12 23:31:25,717 DEBUG [main] http.wire (Wire.java:wire(73)) - http-outgoing-6 << "a[0x90]I... [SNIP]"

@craigcondit
Author

Hey @insideo , so I'm actively reviewing this, but obviously it will be a while before I get through the whole thing. I do have an initial ask, though. When I enable debug logging, my terminal gets bombarded with thousands (more?) of logs that look like the text below. It's probably just a single log call that is going berserk because of a big image over HTTP. Could you look into that?

2021-01-12 23:31:25,717 DEBUG [main] http.wire (Wire.java:wire(73)) - [SNIP]"
2021-01-12 23:31:25,717 DEBUG [main] http.wire (Wire.java:wire(73)) - [SNIP]"

I think that's Apache HttpClient (could try disabling org.apache.http.wire debug logging). It shouldn't be coming from this code.
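For reference, a minimal sketch of that override, assuming the log4j 1.x `log4j.properties` format that Hadoop ships; the logger name is the only part that matters:

```properties
# Silence Apache HttpClient wire logging, which dumps every raw byte
# of each HTTP request/response at DEBUG level.
log4j.logger.org.apache.http.wire=INFO
```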

@eric-badger

I think that's Apache HttpClient (could try disabling org.apache.http.wire debug logging). It shouldn't be coming from this code.

Looks like you're right. I set org.apache.http.wire to INFO logging and things look way better

@eric-badger

eric-badger commented Jan 15, 2021

Still haven't found an issue with the squashfs creation yet, but there is some inherent parsing that needs to happen to convert OCI images into squashfs filesystems that can be read correctly by overlayFS. It looks like whiteout files and opaque directories are not implemented in this PR. The issue is that OCI images handle whiteouts/opaque directories in an annoyingly different way than overlayFS does. OCI uses '.wh.' and '.wh..wh..opq' marker files, while overlayFS uses character devices and directory extended attributes.

OCI Standard:
https://github.com/opencontainers/image-spec/blob/master/layer.md#whiteouts

Whiteouts

  • A whiteout file is an empty file with a special filename that signifies a path should be deleted.
  • A whiteout filename consists of the prefix .wh. plus the basename of the path to be deleted.
  • As files prefixed with .wh. are special whiteout markers, it is not possible to create a filesystem which has a file or directory with a name beginning with .wh..
  • Once a whiteout is applied, the whiteout itself MUST also be hidden.
  • Whiteout files MUST only apply to resources in lower/parent layers.
  • Files that are present in the same layer as a whiteout file can only be hidden by whiteout files in subsequent layers.

Opaque Whiteout

  • In addition to expressing that a single entry should be removed from a lower layer, layers may remove all of the children using an opaque whiteout entry.
  • An opaque whiteout entry is a file with the name .wh..wh..opq indicating that all siblings are hidden in the lower layer.

OverlayFS:
https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt

whiteouts and opaque directories

In order to support rm and rmdir without changing the lower
filesystem, an overlay filesystem needs to record in the upper filesystem
that files have been removed. This is done using whiteouts and opaque
directories (non-directories are always opaque).

A whiteout is created as a character device with 0/0 device number.
When a whiteout is found in the upper level of a merged directory, any
matching name in the lower level is ignored, and the whiteout itself
is also hidden.

A directory is made opaque by setting the xattr "trusted.overlay.opaque"
to "y". Where the upper filesystem contains an opaque directory, any
directory in the lower filesystem with the same name is ignored.

There's some discussion here as to why the OCI spec didn't choose the overlayFS method of whiteouts. Mostly it appears to come down to inconsistent support for extended attributes across tar implementations, which they didn't want to depend on.
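To make the difference concrete, here is a hedged sketch of the name-level mapping a converter would need (the class and method names are hypothetical, not from this PR): a plain `.wh.<name>` entry yields the basename that would be recreated as a 0/0 character device in the overlayFS upper layer, while the `.wh..wh..opq` marker instead maps to the `trusted.overlay.opaque=y` xattr on its parent directory.

```java
import java.util.Optional;

/** Hypothetical helper illustrating OCI -> overlayFS whiteout mapping. */
public class WhiteoutMapper {
  static final String WHITEOUT_PREFIX = ".wh.";
  static final String OPAQUE_MARKER = ".wh..wh..opq";

  /** True if this tar entry basename is any OCI whiteout marker. */
  static boolean isWhiteout(String baseName) {
    return baseName.startsWith(WHITEOUT_PREFIX);
  }

  /** True if this entry marks its parent directory as opaque. */
  static boolean isOpaque(String baseName) {
    return OPAQUE_MARKER.equals(baseName);
  }

  /**
   * For a plain whiteout ".wh.<name>", return the hidden basename; in an
   * overlayFS upper layer this would become a character device with
   * device number 0/0 named <name>. Opaque markers map to the
   * "trusted.overlay.opaque=y" xattr on the parent directory instead,
   * so they return empty here.
   */
  static Optional<String> hiddenName(String baseName) {
    if (isOpaque(baseName) || !isWhiteout(baseName)) {
      return Optional.empty();
    }
    return Optional.of(baseName.substring(WHITEOUT_PREFIX.length()));
  }

  public static void main(String[] args) {
    System.out.println(hiddenName(".wh.data.txt").orElse("-")); // data.txt
    System.out.println(isOpaque(".wh..wh..opq"));               // true
    System.out.println(hiddenName(".wh..wh..opq").orElse("-")); // -
  }
}
```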

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 36s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 4s No case conflicting files found.
+0 🆗 codespell 0m 4s codespell was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 69 new or modified test files.
_ trunk Compile Tests _
+1 💚 mvninstall 34m 12s trunk passed
+1 💚 compile 3m 6s trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04
+1 💚 compile 2m 22s trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08
+1 💚 checkstyle 0m 47s trunk passed
+1 💚 mvnsite 2m 25s trunk passed
+1 💚 javadoc 1m 50s trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04
+1 💚 javadoc 1m 32s trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08
+1 💚 spotbugs 5m 29s trunk passed
+1 💚 shadedclient 18m 36s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 28s Maven dependency ordering for patch
+1 💚 mvninstall 2m 53s the patch passed
+1 💚 compile 3m 4s the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04
+1 💚 javac 3m 4s the patch passed
+1 💚 compile 2m 19s the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08
+1 💚 javac 2m 19s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 0m 40s the patch passed
+1 💚 mvnsite 2m 46s the patch passed
+1 💚 xml 0m 6s The patch has no ill-formed XML file.
+1 💚 javadoc 2m 9s the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04
+1 💚 javadoc 1m 51s the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08
-1 ❌ spotbugs 5m 52s /new-spotbugs-hadoop-tools.html hadoop-tools generated 15 new + 0 unchanged - 0 fixed = 15 total (was 0)
+1 💚 shadedclient 19m 2s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 0m 36s hadoop-runc in the patch passed.
-1 ❌ unit 83m 27s /patch-unit-hadoop-tools.txt hadoop-tools in the patch passed.
+1 💚 asflicense 0m 42s The patch does not generate ASF License warnings.
190m 21s
Reason Tests
SpotBugs module:hadoop-tools
Class org.apache.hadoop.runc.docker.auth.BearerScheme defines non-transient non-serializable instance field client In BearerScheme.java:instance field client In BearerScheme.java
org.apache.hadoop.runc.squashfs.data.DataBlock.getData() may expose internal representation by returning DataBlock.data At DataBlock.java:by returning DataBlock.data At DataBlock.java:[line 28]
new org.apache.hadoop.runc.squashfs.data.DataBlock(byte[], int, int) may expose internal representation by storing an externally mutable object into DataBlock.data At DataBlock.java:expose internal representation by storing an externally mutable object into DataBlock.data At DataBlock.java:[line 44]
org.apache.hadoop.runc.squashfs.directory.DirectoryBuilder$Entry.getName() may expose internal representation by returning DirectoryBuilder$Entry.name At DirectoryBuilder.java:by returning DirectoryBuilder$Entry.name At DirectoryBuilder.java:[line 146]
new org.apache.hadoop.runc.squashfs.directory.DirectoryBuilder$Entry(int, int, short, short, byte[]) may expose internal representation by storing an externally mutable object into DirectoryBuilder$Entry.name At DirectoryBuilder.java:byte[]) may expose internal representation by storing an externally mutable object into DirectoryBuilder$Entry.name At DirectoryBuilder.java:[line 134]
org.apache.hadoop.runc.squashfs.directory.DirectoryEntry.getName() may expose internal representation by returning DirectoryEntry.name At DirectoryEntry.java:by returning DirectoryEntry.name At DirectoryEntry.java:[line 80]
org.apache.hadoop.runc.squashfs.inode.BasicFileINode.getBlockSizes() may expose internal representation by returning BasicFileINode.blockSizes At BasicFileINode.java:by returning BasicFileINode.blockSizes At BasicFileINode.java:[line 187]
org.apache.hadoop.runc.squashfs.inode.BasicFileINode.setBlockSizes(int[]) may expose internal representation by storing an externally mutable object into BasicFileINode.blockSizes At BasicFileINode.java:by storing an externally mutable object into BasicFileINode.blockSizes At BasicFileINode.java:[line 192]
org.apache.hadoop.runc.squashfs.inode.BasicSymlinkINode.getTargetPath() may expose internal representation by returning BasicSymlinkINode.targetPath At BasicSymlinkINode.java:by returning BasicSymlinkINode.targetPath At BasicSymlinkINode.java:[line 68]
org.apache.hadoop.runc.squashfs.inode.ExtendedFileINode.getBlockSizes() may expose internal representation by returning ExtendedFileINode.blockSizes At ExtendedFileINode.java:by returning ExtendedFileINode.blockSizes At ExtendedFileINode.java:[line 85]
org.apache.hadoop.runc.squashfs.inode.ExtendedFileINode.setBlockSizes(int[]) may expose internal representation by storing an externally mutable object into ExtendedFileINode.blockSizes At ExtendedFileINode.java:by storing an externally mutable object into ExtendedFileINode.blockSizes At ExtendedFileINode.java:[line 90]
org.apache.hadoop.runc.squashfs.inode.ExtendedSymlinkINode.getTargetPath() may expose internal representation by returning ExtendedSymlinkINode.targetPath At ExtendedSymlinkINode.java:by returning ExtendedSymlinkINode.targetPath At ExtendedSymlinkINode.java:[line 53]
new org.apache.hadoop.runc.squashfs.metadata.MemoryMetadataBlockReader(int, SuperBlock, byte[], int, int) may expose internal representation by storing an externally mutable object into MemoryMetadataBlockReader.data At MemoryMetadataBlockReader.java:int) may expose internal representation by storing an externally mutable object into MemoryMetadataBlockReader.data At MemoryMetadataBlockReader.java:[line 45]
org.apache.hadoop.runc.squashfs.metadata.MetadataBlock.getData() may expose internal representation by returning MetadataBlock.data At MetadataBlock.java:by returning MetadataBlock.data At MetadataBlock.java:[line 63]
new org.apache.hadoop.runc.squashfs.table.MemoryTableReader(SuperBlock, byte[], int, int) may expose internal representation by storing an externally mutable object into MemoryTableReader.data At MemoryTableReader.java:may expose internal representation by storing an externally mutable object into MemoryTableReader.data At MemoryTableReader.java:[line 41]
Failed junit tests hadoop.tools.fedbalance.TestDistCpProcedure
hadoop.tools.fedbalance.procedure.TestBalanceProcedureScheduler
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/1/artifact/out/Dockerfile
GITHUB PR #2513
Optional Tests dupname asflicense codespell xml compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle
uname Linux 6ea630679121 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 24f004b
Default Java Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/1/testReport/
Max. process+thread count 1096 (vs. ulimit of 5500)
modules C: hadoop-tools/hadoop-runc hadoop-tools U: hadoop-tools
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2513/1/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org

This message was automatically generated.
