
Bump hadoop version to 3.3.4 #17002

Merged 1 commit on Oct 30, 2023
2 changes: 1 addition & 1 deletion core/common/src/main/java/alluxio/conf/PropertyKey.java
@@ -7087,7 +7087,7 @@ public String toString() {
// TODO(ns) Fix default value to handle other UFS types
public static final PropertyKey UNDERFS_VERSION =
stringBuilder(Name.UNDERFS_VERSION)
.setDefaultValue("3.3.1")
.setDefaultValue("3.3.4")
.setIsHidden(true)
.build();

4 changes: 2 additions & 2 deletions dev/scripts/src/alluxio.org/build-distribution/cmd/common.go
@@ -37,7 +37,7 @@ var hadoopDistributions = map[string]version{
"hadoop-3.0": parseVersion("3.0.3"),
"hadoop-3.1": parseVersion("3.1.1"),
"hadoop-3.2": parseVersion("3.2.1"),
"hadoop-3.3": parseVersion("3.3.1"),
"hadoop-3.3": parseVersion("3.3.4"),
// This distribution type is built with 2.7.3, but doesn't include the hadoop version in the name.
"default": parseVersion("2.7.3"),
}
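The map above pins one Hadoop release per distribution key, with a `"default"` entry for builds that do not encode the Hadoop version in their name. A minimal standalone sketch of that lookup (the `resolveHadoopVersion` helper is hypothetical, and plain strings stand in for the script's parsed `version` type):

```go
package main

import "fmt"

// Hypothetical mirror of the script's hadoopDistributions map, using plain
// strings instead of the script's parseVersion result type.
var hadoopDistributions = map[string]string{
	"hadoop-3.2": "3.2.1",
	"hadoop-3.3": "3.3.4", // bumped from 3.3.1 by this PR
	"default":    "2.7.3", // built with 2.7.3; version not encoded in the name
}

// resolveHadoopVersion returns the pinned Hadoop version for a distribution
// key, falling back to the "default" entry for unknown keys.
func resolveHadoopVersion(dist string) string {
	if v, ok := hadoopDistributions[dist]; ok {
		return v
	}
	return hadoopDistributions["default"]
}

func main() {
	fmt.Println(resolveHadoopVersion("hadoop-3.3")) // 3.3.4
	fmt.Println(resolveHadoopVersion("hadoop-9.9")) // 2.7.3
}
```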
@@ -69,7 +69,7 @@ var ufsModules = map[string]module{
"ufs-hadoop-3.0": {"hadoop-3.0", "hdfs", false, "-pl underfs/hdfs -Pufs-hadoop-3 -Dufs.hadoop.version=3.0.0 -PhdfsActiveSync"},
"ufs-hadoop-3.1": {"hadoop-3.1", "hdfs", false, "-pl underfs/hdfs -Pufs-hadoop-3 -Dufs.hadoop.version=3.1.1 -PhdfsActiveSync"},
"ufs-hadoop-3.2": {"hadoop-3.2", "hdfs", true, "-pl underfs/hdfs -Pufs-hadoop-3 -Dufs.hadoop.version=3.2.1 -PhdfsActiveSync"},
"ufs-hadoop-3.3": {"hadoop-3.3", "hdfs", false, "-pl underfs/hdfs -Pufs-hadoop-3 -Dufs.hadoop.version=3.3.1 -PhdfsActiveSync"},
"ufs-hadoop-3.3": {"hadoop-3.3", "hdfs", false, "-pl underfs/hdfs -Pufs-hadoop-3 -Dufs.hadoop.version=3.3.4 -PhdfsActiveSync"},

"ufs-hadoop-ozone-1.2.1": {"hadoop-ozone-1.2.1", "ozone", true, "-pl underfs/ozone -Pufs-hadoop-3 -Dufs.ozone.version=1.2.1"},
"ufs-hadoop-cosn-3.1.0-5.8.5": {"hadoop-cosn-3.1.0-5.8.5", "cosn", true, "-pl underfs/cosn -Dufs.cosn.version=3.1.0-5.8.5"},
@@ -165,7 +165,7 @@ func buildModules(srcPath, name, moduleFlag, version string, modules map[string]
run(fmt.Sprintf("compiling %v module %v", name, moduleName), "mvn", moduleMvnArgs...)
var srcJar string
if moduleEntry.ufsType == "hdfs" {
var versionMvnArg = "3.3.1"
var versionMvnArg = "3.3.4"
for _, arg := range moduleMvnArgs {
if strings.Contains(arg, "ufs.hadoop.version") {
versionMvnArg = strings.Split(arg, "=")[1]
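The `buildModules` hunk above starts from the new default `3.3.4` and overrides it with whatever `-Dufs.hadoop.version=<v>` flag appears among the module's Maven arguments. A self-contained sketch of that parsing (the `hadoopVersionFromArgs` wrapper is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// hadoopVersionFromArgs mirrors the parsing in buildModules: it defaults to
// the bundled HDFS version and overrides it when an explicit
// -Dufs.hadoop.version=<v> flag appears among the Maven arguments.
func hadoopVersionFromArgs(mvnArgs []string) string {
	version := "3.3.4" // new default after this bump
	for _, arg := range mvnArgs {
		if strings.Contains(arg, "ufs.hadoop.version") {
			// Flags look like "-Dufs.hadoop.version=3.2.1"; take the value.
			version = strings.Split(arg, "=")[1]
		}
	}
	return version
}

func main() {
	fmt.Println(hadoopVersionFromArgs([]string{"-pl", "underfs/hdfs"}))      // 3.3.4
	fmt.Println(hadoopVersionFromArgs([]string{"-Dufs.hadoop.version=3.2.1"})) // 3.2.1
}
```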
2 changes: 1 addition & 1 deletion docs/cn/contributor/Building-Alluxio-From-Source.md
@@ -124,7 +124,7 @@ Hadoop versions >= 3.0.0 have the best compatibility with newer versions of Alluxio.
$ mvn clean install -pl underfs/hdfs/ \
-Dmaven.javadoc.skip=true -DskipTests -Dlicense.skip=true \
-Dcheckstyle.skip=true -Dfindbugs.skip=true \
-Pufs-hadoop-3 -Dufs.hadoop.version=3.3.1
-Pufs-hadoop-3 -Dufs.hadoop.version=3.3.4
```
To enable `active sync`, make sure to build with the `hdfsActiveSync` property.
See [Active Sync for HDFS]({{ '/cn/core-services/Unified-Namespace.html' | relativize_url }}#hdfs元数据主动同步) for more information on using Active Sync.
6 changes: 3 additions & 3 deletions docs/cn/ufs/HDFS.md
@@ -15,7 +15,7 @@ priority: 3

To run an Alluxio cluster on a set of machines, the Alluxio server binaries must be deployed on each machine. You can [download the precompiled binaries with the correct Hadoop version](Running-Alluxio-Locally.html), or, for advanced users, [build Alluxio from source](Building-Alluxio-From-Source.html).

Note that when building from source, the default Alluxio server binaries are built to work with HDFS `3.3.1`. To use a different Hadoop version, specify the correct Hadoop version and run the following in the Alluxio source directory:
Note that when building from source, the default Alluxio server binaries are built to work with HDFS `3.3.4`. To use a different Hadoop version, specify the correct Hadoop version and run the following in the Alluxio source directory:

```console
$ mvn install -P<YOUR_HADOOP_PROFILE> -D<HADOOP_VERSION> -DskipTests
@@ -74,9 +74,9 @@ alluxio.master.mount.table.root.ufs=hdfs://nameservice/
Alluxio supports POSIX-like file system [user and permission checking]({{ '/cn/security/Security.html' | relativize_url }}), which is enabled by default since v1.3.
To ensure that file/directory permission information, i.e., the user, group, and access mode on HDFS, is consistent with Alluxio (e.g., a file created by user Foo in Alluxio is also persisted to HDFS with Foo as its owner), the user starting the Alluxio master and worker processes **is required** to be either:

1. The [HDFS super user](http://hadoop.apache.org/docs/r3.3.1/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#The_Super-User). That is, start the Alluxio master and worker processes with the same user that starts the HDFS namenode process.
1. The [HDFS super user](http://hadoop.apache.org/docs/r3.3.4/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#The_Super-User). That is, start the Alluxio master and worker processes with the same user that starts the HDFS namenode process.

2. A member of the [HDFS superuser group](http://hadoop.apache.org/docs/r3.3.1/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#Configuration_Parameters). Edit the HDFS configuration file `hdfs-site.xml` and check the value of the property `dfs.permissions.superusergroup`. If this property is set to a group (e.g., "hdfs"), add the user that starts the Alluxio processes (e.g., "alluxio") to this group ("hdfs"); if the property is not set, add a group to it and make the Alluxio running user a member of that newly added group.
2. A member of the [HDFS superuser group](http://hadoop.apache.org/docs/r3.3.4/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#Configuration_Parameters). Edit the HDFS configuration file `hdfs-site.xml` and check the value of the property `dfs.permissions.superusergroup`. If this property is set to a group (e.g., "hdfs"), add the user that starts the Alluxio processes (e.g., "alluxio") to this group ("hdfs"); if the property is not set, add a group to it and make the Alluxio running user a member of that newly added group.

Note that the user set above is only the identity used to start the Alluxio master and worker processes. Once the Alluxio servers are started, it is **not** necessary to run Alluxio client applications as this user.

4 changes: 2 additions & 2 deletions docs/en/contributor/Building-Alluxio-From-Source.md
@@ -145,7 +145,7 @@ For example,
$ mvn clean install -pl underfs/hdfs/ \
-Dmaven.javadoc.skip=true -DskipTests -Dlicense.skip=true \
-Dcheckstyle.skip=true -Dfindbugs.skip=true \
-Pufs-hadoop-3 -Dufs.hadoop.version=3.3.1
-Pufs-hadoop-3 -Dufs.hadoop.version=3.3.4
```

To enable active sync, be sure to build using the `hdfsActiveSync` property.
@@ -173,7 +173,7 @@ All main builds are from Apache so all Apache releases can be used directly
-Pufs-hadoop-2 -Dufs.hadoop.version=2.9.0
-Pufs-hadoop-2 -Dufs.hadoop.version=2.10.0
-Pufs-hadoop-3 -Dufs.hadoop.version=3.0.0
-Pufs-hadoop-3 -Dufs.hadoop.version=3.3.1
-Pufs-hadoop-3 -Dufs.hadoop.version=3.3.4
```

{% endcollapsible %}
2 changes: 1 addition & 1 deletion docs/en/ufs/COSN.md
@@ -69,7 +69,7 @@ Specify COS configuration information in order to access COS by modifying `conf/
</property>
```

The above is the most basic configuration. For more configuration please refer to [here](https://hadoop.apache.org/docs/r3.3.1/hadoop-cos/cloud-storage/index.html).
The above is the most basic configuration. For more configuration please refer to [here](https://hadoop.apache.org/docs/r3.3.4/hadoop-cos/cloud-storage/index.html).
After these changes, Alluxio should be configured to work with COSN as its under storage system and you can try [Running Alluxio Locally with COSN](#running-alluxio-locally-with-cosn).

### Nested Mount
8 changes: 4 additions & 4 deletions docs/en/ufs/HDFS.md
@@ -23,7 +23,7 @@ with the correct Hadoop version (recommended), or
(for advanced users).

Note that, when building Alluxio from source code, by default Alluxio server binaries are built to
work with Apache Hadoop HDFS of version `3.3.1`. To work with Hadoop distributions of other
work with Apache Hadoop HDFS of version `3.3.4`. To work with Hadoop distributions of other
versions, one needs to specify the correct Hadoop profile and run the following in your Alluxio
directory:

@@ -159,7 +159,7 @@ alluxio.master.mount.table.root.option.alluxio.underfs.hdfs.configuration=/path/
To configure Alluxio to work with HDFS namenodes in HA mode, first configure Alluxio servers to [access HDFS with the proper configuration files](#specify-hdfs-configuration-location).

In addition, set the under storage address to `hdfs://nameservice/` (`nameservice` is
the [HDFS nameservice](https://hadoop.apache.org/docs/r3.3.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html#Configuration_details)
the [HDFS nameservice](https://hadoop.apache.org/docs/r3.3.4/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html#Configuration_details)
already configured in `hdfs-site.xml`). To mount an HDFS subdirectory to Alluxio instead
of the whole HDFS namespace, change the under storage address to something like
`hdfs://nameservice/alluxio/data`.
@@ -176,11 +176,11 @@ HDFS is consistent with Alluxio (e.g., a file created by user Foo in Alluxio is
HDFS also with owner as user Foo), the user to start Alluxio master and worker processes
**is required** to be either:

1. [HDFS super user](http://hadoop.apache.org/docs/r3.3.1/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#The_Super-User).
1. [HDFS super user](http://hadoop.apache.org/docs/r3.3.4/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#The_Super-User).
Namely, use the same user that starts HDFS namenode process to also start Alluxio master and
worker processes.

2. A member of [HDFS superuser group](http://hadoop.apache.org/docs/r3.3.1/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#Configuration_Parameters).
2. A member of [HDFS superuser group](http://hadoop.apache.org/docs/r3.3.4/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#Configuration_Parameters).
Edit HDFS configuration file `hdfs-site.xml` and check the value of configuration property
`dfs.permissions.superusergroup`. If this property is set with a group (e.g., "hdfs"), add the
user to start Alluxio process (e.g., "alluxio") to this group ("hdfs"); if this property is not
2 changes: 1 addition & 1 deletion integration/tools/pom.xml
@@ -26,7 +26,7 @@
<!-- The following paths need to be defined here as well as in the parent pom so that mvn can -->
<!-- run properly from sub-project directories -->
<build.path>${project.parent.parent.basedir}/build</build.path>
<ufs.hadoop.version>3.3.1</ufs.hadoop.version>
<ufs.hadoop.version>3.3.4</ufs.hadoop.version>
<failIfNoTests>false</failIfNoTests>
</properties>

6 changes: 3 additions & 3 deletions pom.xml
@@ -132,7 +132,7 @@
<gson.version>2.8.9</gson.version>
<netty.version>4.1.52.Final</netty.version>
<rocksdb.version>7.0.3</rocksdb.version>
<hadoop.version>3.3.1</hadoop.version>
<hadoop.version>3.3.4</hadoop.version>
<jacoco.version>0.8.5</jacoco.version>
<java.version>1.8</java.version>
<jaxb.version>2.3.3</jaxb.version>
@@ -1388,7 +1388,7 @@
<profile>
<id>hadoop-3</id>
<properties>
<hadoop.version>3.3.1</hadoop.version>
<hadoop.version>3.3.4</hadoop.version>
</properties>
</profile>

@@ -1436,7 +1436,7 @@
<modules>
</modules>
<properties>
<hadoop.version>3.3.1</hadoop.version>
<hadoop.version>3.3.4</hadoop.version>
</properties>
</profile>

2 changes: 1 addition & 1 deletion shaded/hadoop/pom.xml
@@ -27,7 +27,7 @@
<!-- The build path need to be defined here as well as in the parent pom so that mvn can -->
<!-- run properly from sub-project directories -->
<build.path>${project.parent.parent.basedir}/build</build.path>
<ufs.hadoop.version>3.3.1</ufs.hadoop.version>
<ufs.hadoop.version>3.3.4</ufs.hadoop.version>
<shading.prefix>alluxio.shaded.hdfs</shading.prefix>
<!-- Composite jar name with both hadoop version and project version included -->
<lib.jar.name>${project.artifactId}-${ufs.hadoop.version}-${project.version}.jar</lib.jar.name>
2 changes: 1 addition & 1 deletion underfs/abfs/pom.xml
@@ -27,7 +27,7 @@
<!-- The following paths need to be defined here as well as in the parent pom so that mvn can -->
<!-- run properly from sub-project directories -->
<build.path>${project.parent.parent.basedir}/build</build.path>
<ufs.hadoop.version>3.3.1</ufs.hadoop.version>
<ufs.hadoop.version>3.3.4</ufs.hadoop.version>
</properties>

<dependencies>
2 changes: 1 addition & 1 deletion underfs/adl/pom.xml
@@ -25,7 +25,7 @@
<!-- The following paths need to be defined here as well as in the parent pom so that mvn can -->
<!-- run properly from sub-project directories -->
<build.path>${project.parent.parent.basedir}/build</build.path>
<ufs.hadoop.version>3.3.1</ufs.hadoop.version>
<ufs.hadoop.version>3.3.4</ufs.hadoop.version>
</properties>

<dependencies>
2 changes: 1 addition & 1 deletion underfs/hdfs/pom.xml
@@ -26,7 +26,7 @@
<!-- run properly from sub-project directories -->
<build.path>${project.parent.parent.basedir}/build</build.path>
<!-- Decouple hadoop versions between north and south bound -->
<ufs.hadoop.version>3.3.1</ufs.hadoop.version>
<ufs.hadoop.version>3.3.4</ufs.hadoop.version>
<lib.jar.name>${project.artifactId}-${ufs.hadoop.version}-${project.version}.jar</lib.jar.name>
</properties>

2 changes: 1 addition & 1 deletion underfs/wasb/pom.xml
@@ -25,7 +25,7 @@
<!-- The following paths need to be defined here as well as in the parent pom so that mvn can -->
<!-- run properly from sub-project directories -->
<build.path>${project.parent.parent.basedir}/build</build.path>
<ufs.hadoop.version>3.3.1</ufs.hadoop.version>
<ufs.hadoop.version>3.3.4</ufs.hadoop.version>
</properties>

<dependencies>