diff --git a/.serena/project.yml b/.serena/project.yml index d4fd762ad4..16ccb11693 100644 --- a/.serena/project.yml +++ b/.serena/project.yml @@ -103,23 +103,3 @@ default_modes: # fixed set of tools to use as the base tool set (if non-empty), replacing Serena's default set of tools. # This cannot be combined with non-empty excluded_tools or included_optional_tools. fixed_tools: [] - -# override of the corresponding setting in serena_config.yml, see the documentation there. -# If null or missing, the value from the global config is used. -symbol_info_budget: - -# The language backend to use for this project. -# If not set, the global setting from serena_config.yml is used. -# Valid values: LSP, JetBrains -# Note: the backend is fixed at startup. If a project with a different backend -# is activated post-init, an error will be returned. -language_backend: - -# list of regex patterns which, when matched, mark a memory entry as read‑only. -# Extends the list from the global configuration, merging the two lists. -read_only_memory_patterns: [] - -# line ending convention to use when writing source files. -# Possible values: unset (use global setting), "lf", "crlf", or "native" (platform default) -# This does not affect Serena's own files (e.g. memories and configuration files), which always use native line endings. -line_ending: diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 7056afd978..f06c8beafa 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -19,13 +19,13 @@ Before submitting the code, we need to do some preparation: 1. Sign up or login to GitHub: [https://github.com](https://github.com) -2. Fork HugeGraph repo from GitHub: [https://github.com/apache/hugegraph/fork](https://github.com/apache/hugegraph/fork) +2. Fork HugeGraph repo from GitHub: [https://github.com/apache/incubator-hugegraph/fork](https://github.com/apache/incubator-hugegraph/fork) -3. 
Clone code from fork repo to local: [https://github.com/${GITHUB_USER_NAME}/hugegraph](https://github.com/${GITHUB_USER_NAME}/hugegraph) +3. Clone code from fork repo to local: [https://github.com/${GITHUB_USER_NAME}/incubator-hugegraph](https://github.com/${GITHUB_USER_NAME}/incubator-hugegraph) ```shell # clone code from remote to local repo - git clone https://github.com/${GITHUB_USER_NAME}/hugegraph.git hugegraph + git clone https://github.com/${GITHUB_USER_NAME}/incubator-hugegraph.git hugegraph ``` 4. Configure local HugeGraph repo @@ -34,7 +34,7 @@ Before submitting the code, we need to do some preparation: cd hugegraph # add upstream to synchronize the latest code - git remote add hugegraph https://github.com/apache/hugegraph + git remote add hugegraph https://github.com/apache/incubator-hugegraph # set name and email to push code to github git config user.name "{full-name}" # like "Jermy Li" @@ -43,7 +43,7 @@ Before submitting the code, we need to do some preparation: ## 2. Create an Issue on GitHub -If you encounter bugs or have any questions, please go to [GitHub Issues](https://github.com/apache/hugegraph/issues) to report them and feel free to [create an issue](https://github.com/apache/hugegraph/issues/new). +If you encounter bugs or have any questions, please go to [GitHub Issues](https://github.com/apache/incubator-hugegraph/issues) to report them and feel free to [create an issue](https://github.com/apache/incubator-hugegraph/issues/new). ## 3. Make changes of code locally @@ -75,10 +75,10 @@ Note: Code style is defined by the `.editorconfig` file at the repository root. ##### 3.2.1 Check licenses If we want to add new third-party dependencies to the `HugeGraph` project, we need to do the following things: -1. Find the third-party dependent repository, put the dependent `license` file into [./install-dist/release-docs/licenses/](https://github.com/apache/hugegraph/tree/master/install-dist/release-docs/licenses) path. -2. 
Declare the dependency in [./install-dist/release-docs/LICENSE](https://github.com/apache/hugegraph/blob/master/install-dist/release-docs/LICENSE) `LICENSE` information. -3. Find the NOTICE file in the repository and append it to [./install-dist/release-docs/NOTICE](https://github.com/apache/hugegraph/blob/master/install-dist/release-docs/NOTICE) file (skip this step if there is no NOTICE file). -4. Execute locally [./install-dist/scripts/dependency/regenerate_known_dependencies.sh](https://github.com/apache/hugegraph/blob/master/install-dist/scripts/dependency/regenerate_known_dependencies.sh) to update the dependency list [known-dependencies.txt](https://github.com/apache/hugegraph/blob/master/install-dist/scripts/dependency/known-dependencies.txt) (or manually update). +1. Find the third-party dependent repository, put the dependent `license` file into [./hugegraph-dist/release-docs/licenses/](https://github.com/apache/incubator-hugegraph/tree/master/hugegraph-dist/release-docs/licenses) path. +2. Declare the dependency in [./install-dist/release-docs/LICENSE](https://github.com/apache/incubator-hugegraph/blob/master/install-dist/release-docs/LICENSE) `LICENSE` information. +3. Find the NOTICE file in the repository and append it to [./install-dist/release-docs/NOTICE](https://github.com/apache/incubator-hugegraph/blob/master/install-dist/release-docs/NOTICE) file (skip this step if there is no NOTICE file). +4. Execute locally [./install-dist/scripts/dependency/regenerate_known_dependencies.sh](https://github.com/apache/incubator-hugegraph/blob/master/install-dist/scripts/dependency/regenerate_known_dependencies.sh) to update the dependency list [known-dependencies.txt](https://github.com/apache/incubator-hugegraph/blob/master/install-dist/scripts/dependency/known-dependencies.txt) (or manually update). 
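The mechanical parts of the steps above (1 and 4) can be sketched in shell. This is an illustration only: it stages the layout in a throwaway directory, and the dependency name `ant`, the scratch paths, and the license text are assumptions, not commands from this guide.

```shell
# Illustrative sketch of license steps 1 and 4, run against a throwaway
# directory; the dependency name "ant" and the license text are assumptions.
set -eu
REPO=$(mktemp -d)                # stand-in for a local repository checkout
mkdir -p "$REPO/install-dist/release-docs/licenses"

# Step 1: put the dependency's license file under release-docs/licenses/
printf 'Apache License 2.0\n' > "$REPO/LICENSE-ant.txt"   # assumed: fetched from the ant repo
cp "$REPO/LICENSE-ant.txt" "$REPO/install-dist/release-docs/licenses/"

# Steps 2 and 3 are manual edits to release-docs/LICENSE and release-docs/NOTICE.

# Step 4: regenerate the dependency list (uncomment inside a real checkout):
# bash "$REPO/install-dist/scripts/dependency/regenerate_known_dependencies.sh"

ls "$REPO/install-dist/release-docs/licenses"   # → LICENSE-ant.txt
```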
**Example**: A new third-party dependency is introduced into the project -> `ant-1.9.1.jar` - The project source code is located at: https://github.com/apache/ant/tree/rel/1.9.1 diff --git a/DISCLAIMER b/DISCLAIMER new file mode 100644 index 0000000000..be718eef3b --- /dev/null +++ b/DISCLAIMER @@ -0,0 +1,7 @@ +Apache HugeGraph (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator PMC. + +Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, +and decision making process have stabilized in a manner consistent with other successful ASF projects. + +While incubation status is not necessarily a reflection of the completeness or stability of the code, +it does indicate that the project has yet to be fully endorsed by the ASF. diff --git a/NOTICE b/NOTICE index 8e48b813b8..aa6764af84 100644 --- a/NOTICE +++ b/NOTICE @@ -1,5 +1,5 @@ -Apache HugeGraph -Copyright 2022-2026 The Apache Software Foundation +Apache HugeGraph(incubating) +Copyright 2022-2025 The Apache Software Foundation This product includes software developed at The Apache Software Foundation (http://www.apache.org/). diff --git a/README.md b/README.md index eba5d980ee..c027cda43f 100644 --- a/README.md +++ b/README.md @@ -7,8 +7,8 @@
[![License](https://img.shields.io/badge/license-Apache%202-0E78BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html) -[![HugeGraph-CI](https://github.com/apache/hugegraph/actions/workflows/ci.yml/badge.svg)](https://github.com/apache/hugegraph/actions/workflows/ci.yml) -[![License checker](https://github.com/apache/hugegraph/actions/workflows/licence-checker.yml/badge.svg)](https://github.com/apache/hugegraph/actions/workflows/licence-checker.yml) +[![HugeGraph-CI](https://github.com/apache/incubator-hugegraph/actions/workflows/ci.yml/badge.svg)](https://github.com/apache/incubator-hugegraph/actions/workflows/ci.yml) +[![License checker](https://github.com/apache/incubator-hugegraph/actions/workflows/licence-checker.yml/badge.svg)](https://github.com/apache/incubator-hugegraph/actions/workflows/licence-checker.yml) [![GitHub Releases Downloads](https://img.shields.io/github/downloads/apache/hugegraph/total.svg)](https://github.com/apache/hugegraph/releases) [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/apache/hugegraph) @@ -48,7 +48,7 @@ Complete **HugeGraph** ecosystem components: 2. **[hugegraph-computer](https://github.com/apache/hugegraph-computer)** - Integrated **graph computing** system -3. **[hugegraph-ai](https://github.com/apache/hugegraph-ai)** - **Graph AI/LLM/Knowledge Graph** integration +3. **[hugegraph-ai](https://github.com/apache/incubator-hugegraph-ai)** - **Graph AI/LLM/Knowledge Graph** integration 4. **[hugegraph-website](https://github.com/apache/hugegraph-doc)** - **Documentation & website** repository @@ -223,17 +223,9 @@ Download pre-built packages from the [Download Page](https://hugegraph.apache.or ```bash # Download and extract -# For historical 1.7.0 and earlier releases, use the archive URL and -# set PACKAGE=apache-hugegraph-incubating-{version} instead. 
-BASE_URL="https://downloads.apache.org/hugegraph/{version}" -PACKAGE="apache-hugegraph-{version}" -# Historical alternative: -# BASE_URL="https://archive.apache.org/dist/incubator/hugegraph/{version}" -# PACKAGE="apache-hugegraph-incubating-{version}" - -wget ${BASE_URL}/${PACKAGE}.tar.gz -tar -xzf ${PACKAGE}.tar.gz -cd ${PACKAGE} +wget https://downloads.apache.org/incubator/hugegraph/{version}/apache-hugegraph-incubating-{version}.tar.gz +tar -xzf apache-hugegraph-incubating-{version}.tar.gz +cd apache-hugegraph-incubating-{version} # Initialize backend storage bin/init-store.sh @@ -371,7 +363,7 @@ Welcome to contribute to HugeGraph! Thank you to all the contributors who have helped make HugeGraph better! -[![contributors graph](https://contrib.rocks/image?repo=apache/hugegraph)](https://github.com/apache/hugegraph/graphs/contributors) +[![contributors graph](https://contrib.rocks/image?repo=apache/hugegraph)](https://github.com/apache/incubator-hugegraph/graphs/contributors) ## License diff --git a/docker/configs/application-pd0.yml b/docker/configs/application-pd0.yml new file mode 100644 index 0000000000..6531cbafb2 --- /dev/null +++ b/docker/configs/application-pd0.yml @@ -0,0 +1,63 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# + +spring: + application: + name: hugegraph-pd + +management: + metrics: + export: + prometheus: + enabled: true + endpoints: + web: + exposure: + include: "*" + +logging: + config: 'file:./conf/log4j2.xml' +license: + verify-path: ./conf/verify-license.json + license-path: ./conf/hugegraph.license +grpc: + port: 8686 + host: 127.0.0.1 + +server: + port: 8620 + +pd: + data-path: ./pd_data + patrol-interval: 1800 + initial-store-count: 3 + initial-store-list: 127.0.0.1:8500,127.0.0.1:8501,127.0.0.1:8502 + +raft: + address: 127.0.0.1:8610 + peers-list: 127.0.0.1:8610,127.0.0.1:8611,127.0.0.1:8612 + +store: + max-down-time: 172800 + monitor_data_enabled: true + monitor_data_interval: 1 minute + monitor_data_retention: 1 day + initial-store-count: 1 + +partition: + default-shard-count: 1 + store-max-shard-count: 12 diff --git a/docker/configs/application-pd1.yml b/docker/configs/application-pd1.yml new file mode 100644 index 0000000000..0cf9f54297 --- /dev/null +++ b/docker/configs/application-pd1.yml @@ -0,0 +1,63 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +spring: + application: + name: hugegraph-pd + +management: + metrics: + export: + prometheus: + enabled: true + endpoints: + web: + exposure: + include: "*" + +logging: + config: 'file:./conf/log4j2.xml' +license: + verify-path: ./conf/verify-license.json + license-path: ./conf/hugegraph.license +grpc: + port: 8687 + host: 127.0.0.1 + +server: + port: 8621 + +pd: + data-path: ./pd_data + patrol-interval: 1800 + initial-store-count: 3 + initial-store-list: 127.0.0.1:8500,127.0.0.1:8501,127.0.0.1:8502 + +raft: + address: 127.0.0.1:8611 + peers-list: 127.0.0.1:8610,127.0.0.1:8611,127.0.0.1:8612 + +store: + max-down-time: 172800 + monitor_data_enabled: true + monitor_data_interval: 1 minute + monitor_data_retention: 1 day + initial-store-count: 1 + +partition: + default-shard-count: 1 + store-max-shard-count: 12 diff --git a/docker/configs/application-pd2.yml b/docker/configs/application-pd2.yml new file mode 100644 index 0000000000..a0d2c79ea3 --- /dev/null +++ b/docker/configs/application-pd2.yml @@ -0,0 +1,63 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +spring: + application: + name: hugegraph-pd + +management: + metrics: + export: + prometheus: + enabled: true + endpoints: + web: + exposure: + include: "*" + +logging: + config: 'file:./conf/log4j2.xml' +license: + verify-path: ./conf/verify-license.json + license-path: ./conf/hugegraph.license +grpc: + port: 8688 + host: 127.0.0.1 + +server: + port: 8622 + +pd: + data-path: ./pd_data + patrol-interval: 1800 + initial-store-count: 3 + initial-store-list: 127.0.0.1:8500,127.0.0.1:8501,127.0.0.1:8502 + +raft: + address: 127.0.0.1:8612 + peers-list: 127.0.0.1:8610,127.0.0.1:8611,127.0.0.1:8612 + +store: + max-down-time: 172800 + monitor_data_enabled: true + monitor_data_interval: 1 minute + monitor_data_retention: 1 day + initial-store-count: 1 + +partition: + default-shard-count: 1 + store-max-shard-count: 12 diff --git a/docker/configs/application-store0.yml b/docker/configs/application-store0.yml new file mode 100644 index 0000000000..d093f1bfbd --- /dev/null +++ b/docker/configs/application-store0.yml @@ -0,0 +1,57 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +pdserver: + address: 127.0.0.1:8686,127.0.0.1:8687,127.0.0.1:8688 + +management: + metrics: + export: + prometheus: + enabled: true + endpoints: + web: + exposure: + include: "*" + +grpc: + host: 127.0.0.1 + port: 8500 + netty-server: + max-inbound-message-size: 1000MB +raft: + disruptorBufferSize: 1024 + address: 127.0.0.1:8510 + max-log-file-size: 600000000000 + snapshotInterval: 1800 +server: + port: 8520 + +app: + data-path: ./storage + +spring: + application: + name: store-node-grpc-server + profiles: + active: default + include: pd + +logging: + config: 'file:./conf/log4j2.xml' + level: + root: info diff --git a/docker/configs/application-store1.yml b/docker/configs/application-store1.yml new file mode 100644 index 0000000000..0aeba62cf6 --- /dev/null +++ b/docker/configs/application-store1.yml @@ -0,0 +1,57 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +pdserver: + address: 127.0.0.1:8686,127.0.0.1:8687,127.0.0.1:8688 + +management: + metrics: + export: + prometheus: + enabled: true + endpoints: + web: + exposure: + include: "*" + +grpc: + host: 127.0.0.1 + port: 8501 + netty-server: + max-inbound-message-size: 1000MB +raft: + disruptorBufferSize: 1024 + address: 127.0.0.1:8511 + max-log-file-size: 600000000000 + snapshotInterval: 1800 +server: + port: 8521 + +app: + data-path: ./storage + +spring: + application: + name: store-node-grpc-server + profiles: + active: default + include: pd + +logging: + config: 'file:./conf/log4j2.xml' + level: + root: info diff --git a/docker/configs/application-store2.yml b/docker/configs/application-store2.yml new file mode 100644 index 0000000000..e18dc62a3c --- /dev/null +++ b/docker/configs/application-store2.yml @@ -0,0 +1,57 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +pdserver: + address: 127.0.0.1:8686,127.0.0.1:8687,127.0.0.1:8688 + +management: + metrics: + export: + prometheus: + enabled: true + endpoints: + web: + exposure: + include: "*" + +grpc: + host: 127.0.0.1 + port: 8502 + netty-server: + max-inbound-message-size: 1000MB +raft: + disruptorBufferSize: 1024 + address: 127.0.0.1:8512 + max-log-file-size: 600000000000 + snapshotInterval: 1800 +server: + port: 8522 + +app: + data-path: ./storage + +spring: + application: + name: store-node-grpc-server + profiles: + active: default + include: pd + +logging: + config: 'file:./conf/log4j2.xml' + level: + root: info diff --git a/hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-partition.sh b/docker/configs/server1-conf/gremlin-driver-settings.yaml old mode 100755 new mode 100644 similarity index 62% rename from hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-partition.sh rename to docker/configs/server1-conf/gremlin-driver-settings.yaml index d14bc90244..2f60ff8379 --- a/hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-partition.sh +++ b/docker/configs/server1-conf/gremlin-driver-settings.yaml @@ -1,4 +1,3 @@ -#!/bin/bash # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with @@ -7,7 +6,7 @@ # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # -# http://www.apache.org/licenses/LICENSE-2.0 +# http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, @@ -15,20 +14,12 @@ # See the License for the specific language governing permissions and # limitations under the License. 
# -set -euo pipefail - -: "${STORE_REST:?STORE_REST not set}" - -timeout "${WAIT_PARTITION_TIMEOUT_S:-120}s" bash -c ' -until curl -fsS "http://${STORE_REST}" 2>/dev/null | \ - grep -q "\"partitionCount\":[1-9]" -do - echo "Waiting for partition assignment..." - sleep 5 -done -' - -echo "Partitions detected:" -URL="http://${STORE_REST}/v1/partitions" -echo "$URL" -curl -v "$URL" +hosts: [localhost] +port: 8181 +serializer: { + className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, + config: { + serializeResultToString: false, + ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry] + } +} diff --git a/docker/configs/server1-conf/gremlin-server.yaml b/docker/configs/server1-conf/gremlin-server.yaml new file mode 100644 index 0000000000..df73386b26 --- /dev/null +++ b/docker/configs/server1-conf/gremlin-server.yaml @@ -0,0 +1,127 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +# host and port of gremlin server, need to be consistent with host and port in rest-server.properties +host: 127.0.0.1 +port: 8181 + +# timeout in ms of gremlin query +evaluationTimeout: 30000 + +channelizer: org.apache.tinkerpop.gremlin.server.channel.WsAndHttpChannelizer +# don't set graphs here; graphs are added dynamically after startup +graphs: { +} +scriptEngines: { + gremlin-groovy: { + staticImports: [ + org.opencypher.gremlin.process.traversal.CustomPredicates.*, + org.opencypher.gremlin.traversal.CustomFunctions.* + ], + plugins: { + org.apache.hugegraph.plugin.HugeGraphGremlinPlugin: {}, + org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {}, + org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: { + classImports: [ + java.lang.Math, + org.apache.hugegraph.backend.id.IdGenerator, + org.apache.hugegraph.type.define.Directions, + org.apache.hugegraph.type.define.NodeRole, + org.apache.hugegraph.masterelection.GlobalMasterInfo, + org.apache.hugegraph.util.DateUtil, + org.apache.hugegraph.traversal.algorithm.CollectionPathsTraverser, + org.apache.hugegraph.traversal.algorithm.CountTraverser, + org.apache.hugegraph.traversal.algorithm.CustomizedCrosspointsTraverser, + org.apache.hugegraph.traversal.algorithm.CustomizePathsTraverser, + org.apache.hugegraph.traversal.algorithm.FusiformSimilarityTraverser, + org.apache.hugegraph.traversal.algorithm.HugeTraverser, + org.apache.hugegraph.traversal.algorithm.JaccardSimilarTraverser, + org.apache.hugegraph.traversal.algorithm.KneighborTraverser, + org.apache.hugegraph.traversal.algorithm.KoutTraverser, + org.apache.hugegraph.traversal.algorithm.MultiNodeShortestPathTraverser, + org.apache.hugegraph.traversal.algorithm.NeighborRankTraverser, + org.apache.hugegraph.traversal.algorithm.PathsTraverser, + org.apache.hugegraph.traversal.algorithm.PersonalRankTraverser, + org.apache.hugegraph.traversal.algorithm.SameNeighborTraverser, +
org.apache.hugegraph.traversal.algorithm.ShortestPathTraverser, + org.apache.hugegraph.traversal.algorithm.SingleSourceShortestPathTraverser, + org.apache.hugegraph.traversal.algorithm.SubGraphTraverser, + org.apache.hugegraph.traversal.algorithm.TemplatePathsTraverser, + org.apache.hugegraph.traversal.algorithm.steps.EdgeStep, + org.apache.hugegraph.traversal.algorithm.steps.RepeatEdgeStep, + org.apache.hugegraph.traversal.algorithm.steps.WeightedEdgeStep, + org.apache.hugegraph.traversal.optimize.ConditionP, + org.apache.hugegraph.traversal.optimize.Text, + org.apache.hugegraph.traversal.optimize.TraversalUtil, + org.opencypher.gremlin.traversal.CustomFunctions, + org.opencypher.gremlin.traversal.CustomPredicate + ], + methodImports: [ + java.lang.Math#*, + org.opencypher.gremlin.traversal.CustomPredicate#*, + org.opencypher.gremlin.traversal.CustomFunctions#* + ] + }, + org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: { + files: [scripts/empty-sample.groovy] + } + } + } +} +serializers: + - {className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, + config: { + serializeResultToString: false, + ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry] + } + } + - {className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, + config: { + serializeResultToString: false, + ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry] + } + } + - {className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0, + config: { + serializeResultToString: false, + ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry] + } + } + - {className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, + config: { + serializeResultToString: false, + ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry] + } + } +metrics: { + consoleReporter: {enabled: false, interval: 180000}, + csvReporter: {enabled: false, interval: 180000, fileName: ./metrics/gremlin-server-metrics.csv}, + 
jmxReporter: {enabled: false}, + slf4jReporter: {enabled: false, interval: 180000}, + gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST}, + graphiteReporter: {enabled: false, interval: 180000} +} +maxInitialLineLength: 4096 +maxHeaderSize: 8192 +maxChunkSize: 8192 +maxContentLength: 65536 +maxAccumulationBufferComponents: 1024 +resultIterationBatchSize: 64 +writeBufferLowWaterMark: 32768 +writeBufferHighWaterMark: 65536 +ssl: { + enabled: false +} diff --git a/docker/configs/server1-conf/log4j2.xml b/docker/configs/server1-conf/log4j2.xml new file mode 100644 index 0000000000..f1dd7e8395 --- /dev/null +++ b/docker/configs/server1-conf/log4j2.xml @@ -0,0 +1,144 @@ + [144-line log4j2 XML configuration; the markup was lost in extraction and only the property values `logs` (log directory) and `hugegraph-server` (log file name) survive] diff --git a/docker/configs/server1-conf/remote-objects.yaml b/docker/configs/server1-conf/remote-objects.yaml new file mode 100644 index 0000000000..94ebc99190 --- /dev/null +++ b/docker/configs/server1-conf/remote-objects.yaml @@ -0,0 +1,30 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. +# +hosts: [localhost] +port: 8181 +serializer: { + className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, + config: { + serializeResultToString: false, + # The duplication of HugeGraphIoRegistry is meant to fix a bug in the + # 'org.apache.tinkerpop.gremlin.driver.Settings:from(Configuration)' method. + ioRegistries: [ + org.apache.hugegraph.io.HugeGraphIoRegistry, + org.apache.hugegraph.io.HugeGraphIoRegistry + ] + } +} diff --git a/docker/configs/server1-conf/remote.yaml b/docker/configs/server1-conf/remote.yaml new file mode 100644 index 0000000000..2f60ff8379 --- /dev/null +++ b/docker/configs/server1-conf/remote.yaml @@ -0,0 +1,25 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +hosts: [localhost] +port: 8181 +serializer: { + className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, + config: { + serializeResultToString: false, + ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry] + } +} diff --git a/docker/configs/server1-conf/rest-server.properties b/docker/configs/server1-conf/rest-server.properties new file mode 100644 index 0000000000..fce537bb1c --- /dev/null +++ b/docker/configs/server1-conf/rest-server.properties @@ -0,0 +1,29 @@ +# bind url +restserver.url=127.0.0.1:8081 +# gremlin server url, need to be consistent with host and port in gremlin-server.yaml +gremlinserver.url=127.0.0.1:8181 + +graphs=./conf/graphs + +# configuration of arthas +arthas.telnet_port=8562 +arthas.http_port=8561 +arthas.ip=127.0.0.1 +arthas.disabled_commands=jad + +# authentication configs +# choose 'org.apache.hugegraph.auth.StandardAuthenticator' or a custom implementation +#auth.authenticator= +# admin password; by default it is pa and takes effect upon the first startup +#auth.admin_password=pa + +# rpc server configs for multi graph-servers or raft-servers +rpc.server_host=127.0.0.1 +rpc.server_port=8091 + +# lightweight load balancing (beta) +server.id=server-1 +server.role=master + +# slow query log +log.slow_query_threshold=1000 diff --git a/docker/configs/server2-conf/gremlin-driver-settings.yaml b/docker/configs/server2-conf/gremlin-driver-settings.yaml new file mode 100644 index 0000000000..55f38ab97d --- /dev/null +++ b/docker/configs/server2-conf/gremlin-driver-settings.yaml @@ -0,0 +1,25 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License.
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +hosts: [localhost] +port: 8182 +serializer: { + className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, + config: { + serializeResultToString: false, + ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry] + } +} diff --git a/docker/configs/server2-conf/gremlin-server.yaml b/docker/configs/server2-conf/gremlin-server.yaml new file mode 100644 index 0000000000..048dded559 --- /dev/null +++ b/docker/configs/server2-conf/gremlin-server.yaml @@ -0,0 +1,127 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+#
+# host and port of gremlin server, must be consistent with host and port in rest-server.properties
+host: 127.0.0.1
+port: 8182
+
+# timeout in ms of gremlin query
+evaluationTimeout: 30000
+
+channelizer: org.apache.tinkerpop.gremlin.server.channel.WsAndHttpChannelizer
+# don't set graphs here; they are added dynamically since dynamic graph adding is supported
+graphs: {
+}
+scriptEngines: {
+  gremlin-groovy: {
+    staticImports: [
+      org.opencypher.gremlin.process.traversal.CustomPredicates.*,
+      org.opencypher.gremlin.traversal.CustomFunctions.*
+    ],
+    plugins: {
+      org.apache.hugegraph.plugin.HugeGraphGremlinPlugin: {},
+      org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
+      org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: {
+        classImports: [
+          java.lang.Math,
+          org.apache.hugegraph.backend.id.IdGenerator,
+          org.apache.hugegraph.type.define.Directions,
+          org.apache.hugegraph.type.define.NodeRole,
+          org.apache.hugegraph.masterelection.GlobalMasterInfo,
+          org.apache.hugegraph.util.DateUtil,
+          org.apache.hugegraph.traversal.algorithm.CollectionPathsTraverser,
+          org.apache.hugegraph.traversal.algorithm.CountTraverser,
+          org.apache.hugegraph.traversal.algorithm.CustomizedCrosspointsTraverser,
+          org.apache.hugegraph.traversal.algorithm.CustomizePathsTraverser,
+          org.apache.hugegraph.traversal.algorithm.FusiformSimilarityTraverser,
+          org.apache.hugegraph.traversal.algorithm.HugeTraverser,
+          org.apache.hugegraph.traversal.algorithm.JaccardSimilarTraverser,
+          org.apache.hugegraph.traversal.algorithm.KneighborTraverser,
+          org.apache.hugegraph.traversal.algorithm.KoutTraverser,
+          org.apache.hugegraph.traversal.algorithm.MultiNodeShortestPathTraverser,
+          org.apache.hugegraph.traversal.algorithm.NeighborRankTraverser,
+          org.apache.hugegraph.traversal.algorithm.PathsTraverser,
+          org.apache.hugegraph.traversal.algorithm.PersonalRankTraverser,
+          org.apache.hugegraph.traversal.algorithm.SameNeighborTraverser,
+          org.apache.hugegraph.traversal.algorithm.ShortestPathTraverser,
+          org.apache.hugegraph.traversal.algorithm.SingleSourceShortestPathTraverser,
+          org.apache.hugegraph.traversal.algorithm.SubGraphTraverser,
+          org.apache.hugegraph.traversal.algorithm.TemplatePathsTraverser,
+          org.apache.hugegraph.traversal.algorithm.steps.EdgeStep,
+          org.apache.hugegraph.traversal.algorithm.steps.RepeatEdgeStep,
+          org.apache.hugegraph.traversal.algorithm.steps.WeightedEdgeStep,
+          org.apache.hugegraph.traversal.optimize.ConditionP,
+          org.apache.hugegraph.traversal.optimize.Text,
+          org.apache.hugegraph.traversal.optimize.TraversalUtil,
+          org.opencypher.gremlin.traversal.CustomFunctions,
+          org.opencypher.gremlin.traversal.CustomPredicate
+        ],
+        methodImports: [
+          java.lang.Math#*,
+          org.opencypher.gremlin.traversal.CustomPredicate#*,
+          org.opencypher.gremlin.traversal.CustomFunctions#*
+        ]
+      },
+      org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {
+        files: [scripts/empty-sample.groovy]
+      }
+    }
+  }
+}
+serializers:
+  - {className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1,
+     config: {
+       serializeResultToString: false,
+       ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+     }
+  }
+  - {className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
+     config: {
+       serializeResultToString: false,
+       ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+     }
+  }
+  - {className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0,
+     config: {
+       serializeResultToString: false,
+       ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+     }
+  }
+  - {className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0,
+     config: {
+       serializeResultToString: false,
+       ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+     }
+  }
+metrics: {
+  consoleReporter: {enabled: false, interval: 180000},
+  csvReporter: {enabled: false, interval: 180000, fileName: ./metrics/gremlin-server-metrics.csv},
+  jmxReporter: {enabled: false},
+  slf4jReporter: {enabled: false, interval: 180000},
+  gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
+  graphiteReporter: {enabled: false, interval: 180000}
+}
+maxInitialLineLength: 4096
+maxHeaderSize: 8192
+maxChunkSize: 8192
+maxContentLength: 65536
+maxAccumulationBufferComponents: 1024
+resultIterationBatchSize: 64
+writeBufferLowWaterMark: 32768
+writeBufferHighWaterMark: 65536
+ssl: {
+  enabled: false
+}
diff --git a/docker/configs/server2-conf/log4j2.xml b/docker/configs/server2-conf/log4j2.xml
new file mode 100644
index 0000000000..f1dd7e8395
--- /dev/null
+++ b/docker/configs/server2-conf/log4j2.xml
@@ -0,0 +1,144 @@
+<!-- log4j2.xml body not recoverable: the XML markup was stripped in extraction;
+     only the property values "logs" (log directory) and "hugegraph-server"
+     (log file prefix) survive. -->
diff --git a/docker/configs/server2-conf/remote-objects.yaml b/docker/configs/server2-conf/remote-objects.yaml
new file mode 100644
index 0000000000..39679d8c30
--- /dev/null
+++ b/docker/configs/server2-conf/remote-objects.yaml
@@ -0,0 +1,30 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+hosts: [localhost]
+port: 8182
+serializer: {
+  className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
+  config: {
+    serializeResultToString: false,
+    # The duplication of HugeGraphIoRegistry is meant to fix a bug in the
+    # 'org.apache.tinkerpop.gremlin.driver.Settings:from(Configuration)' method.
+    ioRegistries: [
+      org.apache.hugegraph.io.HugeGraphIoRegistry,
+      org.apache.hugegraph.io.HugeGraphIoRegistry
+    ]
+  }
+}
diff --git a/docker/configs/server2-conf/remote.yaml b/docker/configs/server2-conf/remote.yaml
new file mode 100644
index 0000000000..55f38ab97d
--- /dev/null
+++ b/docker/configs/server2-conf/remote.yaml
@@ -0,0 +1,25 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
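The duplicated `HugeGraphIoRegistry` entry in `remote-objects.yaml` works around a bug the config comment attributes to TinkerPop's `Settings.from(Configuration)`. A plausible reading (an assumption, not confirmed against TinkerPop sources) is that a one-element YAML list can be collapsed to a plain scalar by the underlying configuration parser, so the registry list is silently dropped; duplicating the entry forces list semantics, and registering the same registry twice is harmless. A minimal Python sketch of that failure mode, with illustrative names that are not real TinkerPop APIs:

```python
# Hypothetical model of the list-vs-scalar pitfall: a loader that only honors
# list-typed values silently drops a registry that arrives as a bare string.

REGISTRY = "org.apache.hugegraph.io.HugeGraphIoRegistry"

def load_io_registries(raw):
    # Only list-typed values are honored; a collapsed scalar is lost.
    if isinstance(raw, list):
        # Registering the same registry twice is redundant, so deduplicate,
        # preserving order.
        return list(dict.fromkeys(raw))
    return []

assert load_io_registries(REGISTRY) == []                      # scalar: registry lost
assert load_io_registries([REGISTRY, REGISTRY]) == [REGISTRY]  # duplicated: survives
```

Under this model the duplication costs nothing after deduplication, which matches why the config files can carry it unconditionally.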
+#
+hosts: [localhost]
+port: 8182
+serializer: {
+  className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
+  config: {
+    serializeResultToString: false,
+    ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+  }
+}
diff --git a/docker/configs/server2-conf/rest-server.properties b/docker/configs/server2-conf/rest-server.properties
new file mode 100644
index 0000000000..0e296b17b4
--- /dev/null
+++ b/docker/configs/server2-conf/rest-server.properties
@@ -0,0 +1,27 @@
+# bind url
+restserver.url=127.0.0.1:8082
+# gremlin server url, must be consistent with host and port in gremlin-server.yaml
+gremlinserver.url=127.0.0.1:8182
+
+graphs=./conf/graphs
+
+# configuration of arthas
+arthas.telnet_port=8572
+arthas.http_port=8571
+arthas.ip=127.0.0.1
+arthas.disabled_commands=jad
+
+# authentication configs
+# choose 'org.apache.hugegraph.auth.StandardAuthenticator' or a custom implementation
+#auth.authenticator=
+# admin password; by default it is pa, and it takes effect upon the first startup
+#auth.admin_pa=pa
+
+# rpc server configs for multi graph-servers or raft-servers
+rpc.server_host=127.0.0.1
+rpc.server_port=8092
+#rpc.server_timeout=30
+
+# lightweight load balancing (beta)
+server.id=server-2
+server.role=worker
diff --git a/docker/configs/server3-conf/gremlin-driver-settings.yaml b/docker/configs/server3-conf/gremlin-driver-settings.yaml
new file mode 100644
index 0000000000..00ef046699
--- /dev/null
+++ b/docker/configs/server3-conf/gremlin-driver-settings.yaml
@@ -0,0 +1,25 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+hosts: [localhost]
+port: 8183
+serializer: {
+  className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
+  config: {
+    serializeResultToString: false,
+    ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+  }
+}
diff --git a/docker/configs/server3-conf/gremlin-server.yaml b/docker/configs/server3-conf/gremlin-server.yaml
new file mode 100644
index 0000000000..e153926bc9
--- /dev/null
+++ b/docker/configs/server3-conf/gremlin-server.yaml
@@ -0,0 +1,127 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# host and port of gremlin server, must be consistent with host and port in rest-server.properties
+host: 127.0.0.1
+port: 8183
+
+# timeout in ms of gremlin query
+evaluationTimeout: 30000
+
+channelizer: org.apache.tinkerpop.gremlin.server.channel.WsAndHttpChannelizer
+# don't set graphs here; they are added dynamically since dynamic graph adding is supported
+graphs: {
+}
+scriptEngines: {
+  gremlin-groovy: {
+    staticImports: [
+      org.opencypher.gremlin.process.traversal.CustomPredicates.*,
+      org.opencypher.gremlin.traversal.CustomFunctions.*
+    ],
+    plugins: {
+      org.apache.hugegraph.plugin.HugeGraphGremlinPlugin: {},
+      org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
+      org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: {
+        classImports: [
+          java.lang.Math,
+          org.apache.hugegraph.backend.id.IdGenerator,
+          org.apache.hugegraph.type.define.Directions,
+          org.apache.hugegraph.type.define.NodeRole,
+          org.apache.hugegraph.masterelection.GlobalMasterInfo,
+          org.apache.hugegraph.util.DateUtil,
+          org.apache.hugegraph.traversal.algorithm.CollectionPathsTraverser,
+          org.apache.hugegraph.traversal.algorithm.CountTraverser,
+          org.apache.hugegraph.traversal.algorithm.CustomizedCrosspointsTraverser,
+          org.apache.hugegraph.traversal.algorithm.CustomizePathsTraverser,
+          org.apache.hugegraph.traversal.algorithm.FusiformSimilarityTraverser,
+          org.apache.hugegraph.traversal.algorithm.HugeTraverser,
+          org.apache.hugegraph.traversal.algorithm.JaccardSimilarTraverser,
+          org.apache.hugegraph.traversal.algorithm.KneighborTraverser,
+          org.apache.hugegraph.traversal.algorithm.KoutTraverser,
+          org.apache.hugegraph.traversal.algorithm.MultiNodeShortestPathTraverser,
+          org.apache.hugegraph.traversal.algorithm.NeighborRankTraverser,
+          org.apache.hugegraph.traversal.algorithm.PathsTraverser,
+          org.apache.hugegraph.traversal.algorithm.PersonalRankTraverser,
+          org.apache.hugegraph.traversal.algorithm.SameNeighborTraverser,
+          org.apache.hugegraph.traversal.algorithm.ShortestPathTraverser,
+          org.apache.hugegraph.traversal.algorithm.SingleSourceShortestPathTraverser,
+          org.apache.hugegraph.traversal.algorithm.SubGraphTraverser,
+          org.apache.hugegraph.traversal.algorithm.TemplatePathsTraverser,
+          org.apache.hugegraph.traversal.algorithm.steps.EdgeStep,
+          org.apache.hugegraph.traversal.algorithm.steps.RepeatEdgeStep,
+          org.apache.hugegraph.traversal.algorithm.steps.WeightedEdgeStep,
+          org.apache.hugegraph.traversal.optimize.ConditionP,
+          org.apache.hugegraph.traversal.optimize.Text,
+          org.apache.hugegraph.traversal.optimize.TraversalUtil,
+          org.opencypher.gremlin.traversal.CustomFunctions,
+          org.opencypher.gremlin.traversal.CustomPredicate
+        ],
+        methodImports: [
+          java.lang.Math#*,
+          org.opencypher.gremlin.traversal.CustomPredicate#*,
+          org.opencypher.gremlin.traversal.CustomFunctions#*
+        ]
+      },
+      org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {
+        files: [scripts/empty-sample.groovy]
+      }
+    }
+  }
+}
+serializers:
+  - {className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1,
+     config: {
+       serializeResultToString: false,
+       ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+     }
+  }
+  - {className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
+     config: {
+       serializeResultToString: false,
+       ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+     }
+  }
+  - {className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0,
+     config: {
+       serializeResultToString: false,
+       ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+     }
+  }
+  - {className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0,
+     config: {
+       serializeResultToString: false,
+       ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+     }
+  }
+metrics: {
+  consoleReporter: {enabled: false, interval: 180000},
+  csvReporter: {enabled: false, interval: 180000, fileName: ./metrics/gremlin-server-metrics.csv},
+  jmxReporter: {enabled: false},
+  slf4jReporter: {enabled: false, interval: 180000},
+  gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
+  graphiteReporter: {enabled: false, interval: 180000}
+}
+maxInitialLineLength: 4096
+maxHeaderSize: 8192
+maxChunkSize: 8192
+maxContentLength: 65536
+maxAccumulationBufferComponents: 1024
+resultIterationBatchSize: 64
+writeBufferLowWaterMark: 32768
+writeBufferHighWaterMark: 65536
+ssl: {
+  enabled: false
+}
diff --git a/docker/configs/server3-conf/log4j2.xml b/docker/configs/server3-conf/log4j2.xml
new file mode 100644
index 0000000000..f1dd7e8395
--- /dev/null
+++ b/docker/configs/server3-conf/log4j2.xml
@@ -0,0 +1,144 @@
+<!-- log4j2.xml body not recoverable: the XML markup was stripped in extraction;
+     only the property values "logs" (log directory) and "hugegraph-server"
+     (log file prefix) survive. -->
diff --git a/docker/configs/server3-conf/remote-objects.yaml b/docker/configs/server3-conf/remote-objects.yaml
new file mode 100644
index 0000000000..ce99fcb2f6
--- /dev/null
+++ b/docker/configs/server3-conf/remote-objects.yaml
@@ -0,0 +1,30 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+hosts: [localhost]
+port: 8183
+serializer: {
+  className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
+  config: {
+    serializeResultToString: false,
+    # The duplication of HugeGraphIoRegistry is meant to fix a bug in the
+    # 'org.apache.tinkerpop.gremlin.driver.Settings:from(Configuration)' method.
+    ioRegistries: [
+      org.apache.hugegraph.io.HugeGraphIoRegistry,
+      org.apache.hugegraph.io.HugeGraphIoRegistry
+    ]
+  }
+}
diff --git a/docker/configs/server3-conf/remote.yaml b/docker/configs/server3-conf/remote.yaml
new file mode 100644
index 0000000000..00ef046699
--- /dev/null
+++ b/docker/configs/server3-conf/remote.yaml
@@ -0,0 +1,25 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+hosts: [localhost]
+port: 8183
+serializer: {
+  className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
+  config: {
+    serializeResultToString: false,
+    ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+  }
+}
diff --git a/docker/configs/server3-conf/rest-server.properties b/docker/configs/server3-conf/rest-server.properties
new file mode 100644
index 0000000000..f628dc61b4
--- /dev/null
+++ b/docker/configs/server3-conf/rest-server.properties
@@ -0,0 +1,26 @@
+# bind url
+restserver.url=127.0.0.1:8083
+# gremlin server url, must be consistent with host and port in gremlin-server.yaml
+gremlinserver.url=127.0.0.1:8183
+
+graphs=./conf/graphs
+
+# configuration of arthas
+arthas.telnet_port=8582
+arthas.http_port=8581
+arthas.ip=127.0.0.1
+arthas.disabled_commands=jad
+
+# authentication configs
+# choose 'org.apache.hugegraph.auth.StandardAuthenticator' or a custom implementation
+#auth.authenticator=
+# admin password; by default it is pa, and it takes effect upon the first startup
+#auth.admin_pa=pa
+
+# rpc server configs for multi graph-servers or raft-servers
+rpc.server_host=127.0.0.1
+rpc.server_port=8093
+
+# lightweight load balancing (beta)
+server.id=server-3
+server.role=worker
diff --git a/docker/docker-compose-3pd-3store-3server.yml b/docker/docker-compose-3pd-3store-3server.yml
index 26610db01f..f704c1c0f6 100644
--- a/docker/docker-compose-3pd-3store-3server.yml
+++ b/docker/docker-compose-3pd-3store-3server.yml
@@ -15,210 +15,166 @@
 # limitations under the License.
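The three `rest-server.properties` files pair each REST port with a Gremlin server port and an RPC port, and the compose file that follows runs every container with `network_mode: host`, so all of these ports share one network namespace and must be unique on the machine. A quick sanity check of the layout, with the port numbers copied from the configs above:

```python
# Port layout from the server*-conf files; with network_mode: host every
# container binds directly on the host, so no port may repeat anywhere.
servers = {
    "server-1": {"rest": 8081, "gremlin": 8181, "rpc": 8091},
    "server-2": {"rest": 8082, "gremlin": 8182, "rpc": 8092},
    "server-3": {"rest": 8083, "gremlin": 8183, "rpc": 8093},
}

all_ports = [port for cfg in servers.values() for port in cfg.values()]
assert len(all_ports) == len(set(all_ports)), "port collision on the host network"
```

The same constraint applies to the per-server Arthas ports (8561/8562, 8571/8572, 8581/8582), which the configs also keep disjoint.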
# -name: hugegraph-3x3 - -networks: - hg-net: - driver: bridge - -volumes: - hg-pd0-data: - hg-pd1-data: - hg-pd2-data: - hg-store0-data: - hg-store1-data: - hg-store2-data: - -# ── Shared service defaults ────────────────────────────────────────── -# TODO: remove volume mounts below once images are published with new entrypoints -x-pd-common: &pd-common - image: hugegraph/pd:${HUGEGRAPH_VERSION:-latest} - pull_policy: missing - restart: unless-stopped - networks: [hg-net] - entrypoint: ["/hugegraph-pd/docker-entrypoint.sh"] - healthcheck: - test: ["CMD-SHELL", "curl -fsS http://localhost:8620/v1/health >/dev/null || exit 1"] - interval: 15s - timeout: 10s - retries: 30 - start_period: 120s - -x-store-common: &store-common - image: hugegraph/store:${HUGEGRAPH_VERSION:-latest} - pull_policy: missing - restart: unless-stopped - networks: [hg-net] - depends_on: - pd0: { condition: service_healthy } - pd1: { condition: service_healthy } - pd2: { condition: service_healthy } - entrypoint: ["/hugegraph-store/docker-entrypoint.sh"] - healthcheck: - test: ["CMD-SHELL", "curl -fsS http://localhost:8520/v1/health >/dev/null || exit 1"] - interval: 15s - timeout: 15s - retries: 40 - start_period: 120s - -x-server-common: &server-common - image: hugegraph/server:${HUGEGRAPH_VERSION:-latest} - pull_policy: missing - restart: unless-stopped - networks: [hg-net] - depends_on: - store0: { condition: service_healthy } - store1: { condition: service_healthy } - store2: { condition: service_healthy } - entrypoint: ["/hugegraph-server/docker-entrypoint.sh"] - environment: - STORE_REST: store0:8520 - HG_SERVER_BACKEND: hstore - HG_SERVER_PD_PEERS: pd0:8686,pd1:8686,pd2:8686 - healthcheck: - test: ["CMD-SHELL", "curl -fsS http://localhost:8080/versions >/dev/null || exit 1"] - interval: 10s - timeout: 5s - retries: 30 - start_period: 60s - -# ── Services ────────────────────────────────────────────────────────── +# TODO: reuse the configs for same type containers +# User could modify 
the node nums and the port by themselves +version: "3" services: - # --- PD cluster (3 nodes) --- pd0: - <<: *pd-common - container_name: hg-pd0 + image: hugegraph/pd + container_name: pd0 hostname: pd0 - networks: [ hg-net ] - environment: - HG_PD_GRPC_HOST: pd0 - HG_PD_GRPC_PORT: "8686" - HG_PD_REST_PORT: "8620" - HG_PD_RAFT_ADDRESS: pd0:8610 - HG_PD_RAFT_PEERS_LIST: pd0:8610,pd1:8610,pd2:8610 - HG_PD_INITIAL_STORE_LIST: store0:8500,store1:8500,store2:8500 - HG_PD_DATA_PATH: /hugegraph-pd/pd_data - HG_PD_INITIAL_STORE_COUNT: 3 - ports: ["8620:8620", "8686:8686"] + network_mode: host + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8620"] + interval: 10s + timeout: 5s + retries: 3 volumes: - - hg-pd0-data:/hugegraph-pd/pd_data - - ../hugegraph-pd/hg-pd-dist/docker/docker-entrypoint.sh:/hugegraph-pd/docker-entrypoint.sh + - ./configs/application-pd0.yml:/hugegraph-pd/conf/application.yml pd1: - <<: *pd-common - container_name: hg-pd1 + image: hugegraph/pd + container_name: pd1 hostname: pd1 - networks: [ hg-net ] - environment: - HG_PD_GRPC_HOST: pd1 - HG_PD_GRPC_PORT: "8686" - HG_PD_REST_PORT: "8620" - HG_PD_RAFT_ADDRESS: pd1:8610 - HG_PD_RAFT_PEERS_LIST: pd0:8610,pd1:8610,pd2:8610 - HG_PD_INITIAL_STORE_LIST: store0:8500,store1:8500,store2:8500 - HG_PD_DATA_PATH: /hugegraph-pd/pd_data - HG_PD_INITIAL_STORE_COUNT: 3 - ports: ["8621:8620", "8687:8686"] + network_mode: host + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8621"] + interval: 10s + timeout: 5s + retries: 3 volumes: - - hg-pd1-data:/hugegraph-pd/pd_data - - ../hugegraph-pd/hg-pd-dist/docker/docker-entrypoint.sh:/hugegraph-pd/docker-entrypoint.sh + - ./configs/application-pd1.yml:/hugegraph-pd/conf/application.yml pd2: - <<: *pd-common - container_name: hg-pd2 + image: hugegraph/pd + container_name: pd2 hostname: pd2 - networks: [ hg-net ] - environment: - HG_PD_GRPC_HOST: pd2 - HG_PD_GRPC_PORT: "8686" - HG_PD_REST_PORT: "8620" - HG_PD_RAFT_ADDRESS: pd2:8610 - 
HG_PD_RAFT_PEERS_LIST: pd0:8610,pd1:8610,pd2:8610 - HG_PD_INITIAL_STORE_LIST: store0:8500,store1:8500,store2:8500 - HG_PD_DATA_PATH: /hugegraph-pd/pd_data - HG_PD_INITIAL_STORE_COUNT: 3 - ports: ["8622:8620", "8688:8686"] + network_mode: host + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8622"] + interval: 10s + timeout: 5s + retries: 3 volumes: - - hg-pd2-data:/hugegraph-pd/pd_data - - ../hugegraph-pd/hg-pd-dist/docker/docker-entrypoint.sh:/hugegraph-pd/docker-entrypoint.sh + - ./configs/application-pd2.yml:/hugegraph-pd/conf/application.yml - # --- Store cluster (3 nodes) --- store0: - <<: *store-common - container_name: hg-store0 + image: hugegraph/store + container_name: store0 hostname: store0 - environment: - HG_STORE_PD_ADDRESS: pd0:8686,pd1:8686,pd2:8686 - HG_STORE_GRPC_HOST: store0 - HG_STORE_GRPC_PORT: "8500" - HG_STORE_REST_PORT: "8520" - HG_STORE_RAFT_ADDRESS: store0:8510 - HG_STORE_DATA_PATH: /hugegraph-store/storage - ports: ["8500:8500", "8510:8510", "8520:8520"] + network_mode: host + depends_on: + pd0: + condition: service_healthy + pd1: + condition: service_healthy + pd2: + condition: service_healthy + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8520"] + interval: 10s + timeout: 5s + retries: 3 volumes: - - hg-store0-data:/hugegraph-store/storage - - ../hugegraph-store/hg-store-dist/docker/docker-entrypoint.sh:/hugegraph-store/docker-entrypoint.sh + - ./configs/application-store0.yml:/hugegraph-store/conf/application.yml store1: - <<: *store-common - container_name: hg-store1 + image: hugegraph/store + container_name: store1 hostname: store1 - environment: - HG_STORE_PD_ADDRESS: pd0:8686,pd1:8686,pd2:8686 - HG_STORE_GRPC_HOST: store1 - HG_STORE_GRPC_PORT: "8500" - HG_STORE_REST_PORT: "8520" - HG_STORE_RAFT_ADDRESS: store1:8510 - HG_STORE_DATA_PATH: /hugegraph-store/storage - ports: ["8501:8500", "8511:8510", "8521:8520"] + network_mode: host + depends_on: + pd0: + condition: service_healthy + pd1: + condition: 
service_healthy + pd2: + condition: service_healthy + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8521"] + interval: 10s + timeout: 5s + retries: 3 volumes: - - hg-store1-data:/hugegraph-store/storage - - ../hugegraph-store/hg-store-dist/docker/docker-entrypoint.sh:/hugegraph-store/docker-entrypoint.sh + - ./configs/application-store1.yml:/hugegraph-store/conf/application.yml store2: - <<: *store-common - container_name: hg-store2 + image: hugegraph/store + container_name: store2 hostname: store2 - environment: - HG_STORE_PD_ADDRESS: pd0:8686,pd1:8686,pd2:8686 - HG_STORE_GRPC_HOST: store2 - HG_STORE_GRPC_PORT: "8500" - HG_STORE_REST_PORT: "8520" - HG_STORE_RAFT_ADDRESS: store2:8510 - HG_STORE_DATA_PATH: /hugegraph-store/storage - ports: ["8502:8500", "8512:8510", "8522:8520"] - volumes: - - hg-store2-data:/hugegraph-store/storage - - ../hugegraph-store/hg-store-dist/docker/docker-entrypoint.sh:/hugegraph-store/docker-entrypoint.sh - - # --- Server cluster (3 nodes) --- - server0: - <<: *server-common - container_name: hg-server0 - hostname: server0 - ports: ["8080:8080"] + network_mode: host + depends_on: + pd0: + condition: service_healthy + pd1: + condition: service_healthy + pd2: + condition: service_healthy + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8522"] + interval: 10s + timeout: 5s + retries: 3 volumes: - - ../hugegraph-server/hugegraph-dist/docker/docker-entrypoint.sh:/hugegraph-server/docker-entrypoint.sh - - ../hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-storage.sh:/hugegraph-server/bin/wait-storage.sh - - ../hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-partition.sh:/hugegraph-server/bin/wait-partition.sh + - ./configs/application-store2.yml:/hugegraph-store/conf/application.yml server1: - <<: *server-common - container_name: hg-server1 + image: hugegraph/server + container_name: server1 hostname: server1 - ports: ["8081:8080"] + network_mode: host + depends_on: + store0: + condition: 
service_healthy + store1: + condition: service_healthy + store2: + condition: service_healthy + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8081"] + interval: 10s + timeout: 5s + retries: 3 volumes: - - ../hugegraph-server/hugegraph-dist/docker/docker-entrypoint.sh:/hugegraph-server/docker-entrypoint.sh - - ../hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-storage.sh:/hugegraph-server/bin/wait-storage.sh - - ../hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-partition.sh:/hugegraph-server/bin/wait-partition.sh + - ./configs/server1-conf:/hugegraph-server/conf server2: - <<: *server-common - container_name: hg-server2 + image: hugegraph/server + container_name: server2 hostname: server2 - ports: ["8082:8080"] + network_mode: host + depends_on: + store0: + condition: service_healthy + store1: + condition: service_healthy + store2: + condition: service_healthy + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8082"] + interval: 10s + timeout: 5s + retries: 3 + volumes: + - ./configs/server2-conf:/hugegraph-server/conf + + server3: + image: hugegraph/server + container_name: server3 + hostname: server3 + network_mode: host + depends_on: + store0: + condition: service_healthy + store1: + condition: service_healthy + store2: + condition: service_healthy + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8083"] + interval: 10s + timeout: 5s + retries: 3 volumes: - - ../hugegraph-server/hugegraph-dist/docker/docker-entrypoint.sh:/hugegraph-server/docker-entrypoint.sh - - ../hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-storage.sh:/hugegraph-server/bin/wait-storage.sh - - ../hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-partition.sh:/hugegraph-server/bin/wait-partition.sh + - ./configs/server3-conf:/hugegraph-server/conf diff --git a/docker/docker-compose.dev.yml b/docker/docker-compose.dev.yml deleted file mode 100644 index aa0736a38b..0000000000 --- 
a/docker/docker-compose.dev.yml +++ /dev/null @@ -1,106 +0,0 @@ -# -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# - -name: hugegraph-single - -networks: - hg-net: - driver: bridge - -volumes: - hg-pd-data: - hg-store-data: - -services: - pd: - build: - context: .. - dockerfile: hugegraph-pd/Dockerfile - container_name: hg-pd - hostname: pd - restart: unless-stopped - networks: [hg-net] - environment: - HG_PD_GRPC_HOST: pd - HG_PD_GRPC_PORT: "8686" - HG_PD_REST_PORT: "8620" - HG_PD_RAFT_ADDRESS: pd:8610 - HG_PD_RAFT_PEERS_LIST: pd:8610 - HG_PD_INITIAL_STORE_LIST: store:8500 - HG_PD_DATA_PATH: /hugegraph-pd/pd_data - ports: - - "8620:8620" - volumes: - - hg-pd-data:/hugegraph-pd/pd_data - healthcheck: - test: ["CMD-SHELL", "curl -fsS http://localhost:8620/v1/health >/dev/null || exit 1"] - interval: 10s - timeout: 5s - retries: 12 - start_period: 20s - - store: - build: - context: .. 
- dockerfile: hugegraph-store/Dockerfile - container_name: hg-store - hostname: store - restart: unless-stopped - networks: [hg-net] - depends_on: - pd: - condition: service_healthy - environment: - HG_STORE_PD_ADDRESS: pd:8686 - HG_STORE_GRPC_HOST: store - HG_STORE_GRPC_PORT: "8500" - HG_STORE_REST_PORT: "8520" - HG_STORE_RAFT_ADDRESS: store:8510 - HG_STORE_DATA_PATH: /hugegraph-store/storage - ports: - - "8520:8520" - volumes: - - hg-store-data:/hugegraph-store/storage - healthcheck: - test: ["CMD-SHELL", "curl -fsS http://localhost:8520/v1/health >/dev/null || exit 1"] - interval: 10s - timeout: 10s - retries: 30 - start_period: 30s - - server: - build: - context: .. - dockerfile: hugegraph-server/Dockerfile-hstore - container_name: hg-server - hostname: server - restart: unless-stopped - networks: [hg-net] - depends_on: - store: - condition: service_healthy - environment: - HG_SERVER_BACKEND: hstore - HG_SERVER_PD_PEERS: pd:8686 - ports: - - "8080:8080" - healthcheck: - test: ["CMD-SHELL", "curl -fsS http://localhost:8080/versions >/dev/null || exit 1"] - interval: 10s - timeout: 5s - retries: 30 - start_period: 60s diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml index d3700daf96..0c90c1e451 100644 --- a/docker/docker-compose.yml +++ b/docker/docker-compose.yml @@ -14,119 +14,45 @@ # See the License for the specific language governing permissions and # limitations under the License. 
# -# TODO: remove volume mounts below once images are published with new entrypoints -name: hugegraph-single -networks: - hg-net: - driver: bridge - -volumes: - hg-pd-data: - hg-store-data: +version: "3" services: - pd: - image: hugegraph/pd:${HUGEGRAPH_VERSION:-latest} - pull_policy: always - container_name: hg-pd + image: hugegraph/pd + container_name: pd hostname: pd - restart: unless-stopped - networks: [hg-net] - - entrypoint: ["/hugegraph-pd/docker-entrypoint.sh"] - - environment: - HG_PD_GRPC_HOST: pd - HG_PD_GRPC_PORT: "8686" - HG_PD_REST_PORT: "8620" - HG_PD_RAFT_ADDRESS: pd:8610 - HG_PD_RAFT_PEERS_LIST: pd:8610 - HG_PD_INITIAL_STORE_LIST: store:8500 - HG_PD_DATA_PATH: /hugegraph-pd/pd_data - - ports: - - "8620:8620" - - volumes: - - hg-pd-data:/hugegraph-pd/pd_data - - ../hugegraph-pd/hg-pd-dist/docker/docker-entrypoint.sh:/hugegraph-pd/docker-entrypoint.sh - + network_mode: host healthcheck: - test: ["CMD-SHELL", "curl -fsS http://localhost:8620/v1/health >/dev/null || exit 1"] + test: ["CMD", "curl", "-f", "http://localhost:8620"] interval: 10s timeout: 5s - retries: 12 - start_period: 30s - + retries: 3 store: - image: hugegraph/store:${HUGEGRAPH_VERSION:-latest} - pull_policy: always - container_name: hg-store + image: hugegraph/store + container_name: store hostname: store - restart: unless-stopped - networks: [hg-net] - - entrypoint: ["/hugegraph-store/docker-entrypoint.sh"] - + network_mode: host depends_on: pd: condition: service_healthy - - environment: - HG_STORE_PD_ADDRESS: pd:8686 - HG_STORE_GRPC_HOST: store - HG_STORE_GRPC_PORT: "8500" - HG_STORE_REST_PORT: "8520" - HG_STORE_RAFT_ADDRESS: store:8510 - HG_STORE_DATA_PATH: /hugegraph-store/storage - - ports: - - "8520:8520" - - volumes: - - hg-store-data:/hugegraph-store/storage - - ../hugegraph-store/hg-store-dist/docker/docker-entrypoint.sh:/hugegraph-store/docker-entrypoint.sh - healthcheck: - test: ["CMD-SHELL", "curl -fsS http://localhost:8520/v1/health >/dev/null || exit 1"] + test: 
["CMD", "curl", "-f", "http://localhost:8520"] interval: 10s - timeout: 10s - retries: 30 - start_period: 60s - + timeout: 5s + retries: 3 server: - image: hugegraph/server:${HUGEGRAPH_VERSION:-latest} - pull_policy: always - container_name: hg-server + image: hugegraph/server + container_name: server hostname: server - restart: unless-stopped - networks: [hg-net] - - entrypoint: ["/hugegraph-server/docker-entrypoint.sh"] - + network_mode: host depends_on: store: condition: service_healthy - - environment: - HG_SERVER_BACKEND: hstore - HG_SERVER_PD_PEERS: pd:8686 - - ports: - - "8080:8080" - - volumes: - - ../hugegraph-server/hugegraph-dist/docker/docker-entrypoint.sh:/hugegraph-server/docker-entrypoint.sh - - ../hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-storage.sh:/hugegraph-server/bin/wait-storage.sh - - ../hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-partition.sh:/hugegraph-server/bin/wait-partition.sh - healthcheck: - test: ["CMD-SHELL", "curl -fsS http://localhost:8080/versions >/dev/null || exit 1"] + test: ["CMD", "curl", "-f", "http://localhost:8080"] interval: 10s timeout: 5s - retries: 30 - start_period: 60s + retries: 3 diff --git a/hugegraph-cluster-test/hugegraph-clustertest-dist/src/assembly/static/conf/hugegraph.properties.template b/hugegraph-cluster-test/hugegraph-clustertest-dist/src/assembly/static/conf/hugegraph.properties.template index 005031fe60..f97e365748 100644 --- a/hugegraph-cluster-test/hugegraph-clustertest-dist/src/assembly/static/conf/hugegraph.properties.template +++ b/hugegraph-cluster-test/hugegraph-clustertest-dist/src/assembly/static/conf/hugegraph.properties.template @@ -45,7 +45,6 @@ store=hugegraph pd.peers=$PD_PEERS_LIST$ # task config -task.scheduler_type=local task.schedule_period=10 task.retry=0 task.wait_timeout=10 diff --git a/hugegraph-cluster-test/hugegraph-clustertest-minicluster/src/main/java/org/apache/hugegraph/ct/base/ClusterConstant.java 
b/hugegraph-cluster-test/hugegraph-clustertest-minicluster/src/main/java/org/apache/hugegraph/ct/base/ClusterConstant.java index 730bbc53ed..9120c0cf92 100644 --- a/hugegraph-cluster-test/hugegraph-clustertest-minicluster/src/main/java/org/apache/hugegraph/ct/base/ClusterConstant.java +++ b/hugegraph-cluster-test/hugegraph-clustertest-minicluster/src/main/java/org/apache/hugegraph/ct/base/ClusterConstant.java @@ -33,12 +33,12 @@ public class ClusterConstant { public static final String PLUGINS_DIR = "plugins"; public static final String BIN_DIR = "bin"; public static final String CONF_DIR = "conf"; - public static final String PD_PACKAGE_PREFIX = "apache-hugegraph-pd"; + public static final String PD_PACKAGE_PREFIX = "apache-hugegraph-pd-incubating"; public static final String PD_JAR_PREFIX = "hg-pd-service"; - public static final String STORE_PACKAGE_PREFIX = "apache-hugegraph-store"; + public static final String STORE_PACKAGE_PREFIX = "apache-hugegraph-store-incubating"; public static final String STORE_JAR_PREFIX = "hg-store-node"; - public static final String SERVER_PACKAGE_PREFIX = "apache-hugegraph-server"; - public static final String CT_PACKAGE_PREFIX = "apache-hugegraph-ct"; + public static final String SERVER_PACKAGE_PREFIX = "apache-hugegraph-server-incubating"; + public static final String CT_PACKAGE_PREFIX = "apache-hugegraph-ct-incubating"; public static final String APPLICATION_FILE = "application.yml"; public static final String SERVER_PROPERTIES = "rest-server.properties"; public static final String HUGEGRAPH_PROPERTIES = "graphs/hugegraph.properties"; diff --git a/hugegraph-cluster-test/hugegraph-clustertest-test/src/main/java/org/apache/hugegraph/MultiClusterTest/BaseMultiClusterTest.java b/hugegraph-cluster-test/hugegraph-clustertest-test/src/main/java/org/apache/hugegraph/MultiClusterTest/BaseMultiClusterTest.java index 9e90933026..af640b3a94 100644 --- 
a/hugegraph-cluster-test/hugegraph-clustertest-test/src/main/java/org/apache/hugegraph/MultiClusterTest/BaseMultiClusterTest.java +++ b/hugegraph-cluster-test/hugegraph-clustertest-test/src/main/java/org/apache/hugegraph/MultiClusterTest/BaseMultiClusterTest.java @@ -38,7 +38,7 @@ * MultiNode Test generate the cluster env with 3 pd node + 3 store node + 3 server node. * Or you can set different num of nodes by using env = new MultiNodeEnv(pdNum, storeNum, serverNum) * All nodes are deployed in ports generated randomly, the application of nodes are stored - * in /apache-hugegraph-ct-1.7.0, you can visit each node with rest api. + * in /apache-hugegraph-ct-incubating-1.7.0, you can visit each node with rest api. */ public class BaseMultiClusterTest { diff --git a/hugegraph-cluster-test/hugegraph-clustertest-test/src/main/java/org/apache/hugegraph/SimpleClusterTest/BaseSimpleTest.java b/hugegraph-cluster-test/hugegraph-clustertest-test/src/main/java/org/apache/hugegraph/SimpleClusterTest/BaseSimpleTest.java index f0f0c33461..849b4b835f 100644 --- a/hugegraph-cluster-test/hugegraph-clustertest-test/src/main/java/org/apache/hugegraph/SimpleClusterTest/BaseSimpleTest.java +++ b/hugegraph-cluster-test/hugegraph-clustertest-test/src/main/java/org/apache/hugegraph/SimpleClusterTest/BaseSimpleTest.java @@ -45,7 +45,7 @@ /** * Simple Test generate the cluster env with 1 pd node + 1 store node + 1 server node. * All nodes are deployed in ports generated randomly; The application of nodes is stored - * in /apache-hugegraph-ct-1.7.0, you can visit each node with rest api. + * in /apache-hugegraph-ct-incubating-1.7.0, you can visit each node with rest api. 
*/ public class BaseSimpleTest { diff --git a/hugegraph-cluster-test/pom.xml b/hugegraph-cluster-test/pom.xml index cd54ac0ffe..ecb47b7970 100644 --- a/hugegraph-cluster-test/pom.xml +++ b/hugegraph-cluster-test/pom.xml @@ -42,7 +42,7 @@ 11 11 UTF-8 - apache-${release.name}-ct-${project.version} + apache-${release.name}-ct-incubating-${project.version} diff --git a/hugegraph-commons/README.md b/hugegraph-commons/README.md index d8cbcbc24a..7162e93137 100644 --- a/hugegraph-commons/README.md +++ b/hugegraph-commons/README.md @@ -3,8 +3,8 @@ [![License](https://img.shields.io/badge/license-Apache%202-0E78BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html) [![codecov](https://codecov.io/gh/hugegraph/hugegraph-common/branch/master/graph/badge.svg)](https://codecov.io/gh/hugegraph/hugegraph-common) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/org.apache.hugegraph/hugegraph-common/badge.svg)](https://mvnrepository.com/artifact/org.apache.hugegraph/hugegraph-common) -[![CodeQL](https://github.com/apache/hugegraph-commons/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/apache/hugegraph-commons/actions/workflows/codeql-analysis.yml) -[![hugegraph-commons ci](https://github.com/apache/hugegraph-commons/actions/workflows/ci.yml/badge.svg)](https://github.com/apache/hugegraph-commons/actions/workflows/ci.yml) +[![CodeQL](https://github.com/apache/incubator-hugegraph-commons/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/apache/incubator-hugegraph-commons/actions/workflows/codeql-analysis.yml) +[![hugegraph-commons ci](https://github.com/apache/incubator-hugegraph-commons/actions/workflows/ci.yml/badge.svg)](https://github.com/apache/incubator-hugegraph-commons/actions/workflows/ci.yml) hugegraph-commons is a common module for [HugeGraph](https://github.com/apache/hugegraph) and its peripheral components. 
@@ -49,7 +49,7 @@ And here are links of other repositories: - Note: It's recommended to use [GitHub Desktop](https://desktop.github.com/) to greatly simplify the PR and commit process. - Thank you to all the people who already contributed to HugeGraph! -[![contributors graph](https://contrib.rocks/image?repo=apache/hugegraph-commons)](https://github.com/apache/hugegraph-commons/graphs/contributors) +[![contributors graph](https://contrib.rocks/image?repo=apache/incubator-hugegraph-commons)](https://github.com/apache/incubator-hugegraph-commons/graphs/contributors) ## Licence @@ -59,8 +59,8 @@ Same as HugeGraph, hugegraph-commons are also licensed under [Apache 2.0](./LICE --- - - [GitHub Issues](https://github.com/apache/hugegraph-commons/issues): Feedback on usage issues and functional requirements (quick response) + - [GitHub Issues](https://github.com/apache/incubator-hugegraph-commons/issues): Feedback on usage issues and functional requirements (quick response) - Feedback Email: [dev@hugegraph.apache.org](mailto:dev@hugegraph.apache.org) ([subscriber](https://hugegraph.apache.org/docs/contribution-guidelines/subscribe/) only) - WeChat public account: Apache HugeGraph, welcome to scan this QR code to follow us. - QR png + QR png diff --git a/hugegraph-commons/hugegraph-common/pom.xml b/hugegraph-commons/hugegraph-common/pom.xml index 14f7cc217c..a57bcf59cd 100644 --- a/hugegraph-commons/hugegraph-common/pom.xml +++ b/hugegraph-commons/hugegraph-common/pom.xml @@ -28,7 +28,7 @@ hugegraph-common ${project.artifactId} - https://github.com/apache/hugegraph-commons/tree/master/hugegraph-common + https://github.com/apache/incubator-hugegraph-commons/tree/master/hugegraph-common hugegraph-common is a common module for HugeGraph and its peripheral components.
hugegraph-common encapsulates locks, configurations, events, iterators, rest and some diff --git a/hugegraph-commons/hugegraph-common/src/test/java/org/apache/hugegraph/unit/rest/RestClientTest.java b/hugegraph-commons/hugegraph-common/src/test/java/org/apache/hugegraph/unit/rest/RestClientTest.java index 93a69dd8ec..712aea7ab2 100644 --- a/hugegraph-commons/hugegraph-common/src/test/java/org/apache/hugegraph/unit/rest/RestClientTest.java +++ b/hugegraph-commons/hugegraph-common/src/test/java/org/apache/hugegraph/unit/rest/RestClientTest.java @@ -112,7 +112,7 @@ public void testPostWithTokenAndAllParams() { @Test public void testPostHttpsWithAllParams() { - String url = "https://github.com/apache/hugegraph-doc/" + + String url = "https://github.com/apache/incubator-hugegraph-doc/" + "raw/master/dist/commons/cacerts.jks"; String trustStoreFile = "src/test/resources/cacerts.jks"; BaseUnitTest.downloadFileByUrl(url, trustStoreFile); @@ -129,7 +129,7 @@ public void testPostHttpsWithAllParams() { @Test public void testPostHttpsWithTokenAndAllParams() { - String url = "https://github.com/apache/hugegraph-doc/" + + String url = "https://github.com/apache/incubator-hugegraph-doc/" + "raw/master/dist/commons/cacerts.jks"; String trustStoreFile = "src/test/resources/cacerts.jks"; BaseUnitTest.downloadFileByUrl(url, trustStoreFile); diff --git a/hugegraph-commons/pom.xml b/hugegraph-commons/pom.xml index b9e780bd32..59d12b99ad 100644 --- a/hugegraph-commons/pom.xml +++ b/hugegraph-commons/pom.xml @@ -50,7 +50,7 @@ - Apache HugeGraph + Apache HugeGraph (Incubating) dev-subscribe@hugegraph.apache.org https://hugegraph.apache.org/ @@ -61,7 +61,7 @@ Developer List dev-subscribe@hugegraph.apache.org dev-unsubscribe@hugegraph.apache.org - dev@hugegraph.apache.org + dev@hugegraph.incubator.apache.org Commits List diff --git a/hugegraph-pd/AGENTS.md b/hugegraph-pd/AGENTS.md index c9ba2bcfa0..0b501bf640 100644 --- a/hugegraph-pd/AGENTS.md +++ b/hugegraph-pd/AGENTS.md @@ -110,7 +110,7
@@ mvn clean install # Build distribution package only mvn clean package -pl hg-pd-dist -am -DskipTests -# Output: hugegraph-pd/apache-hugegraph-pd-.tar.gz +# Output: hg-pd-dist/target/apache-hugegraph-pd-incubating-.tar.gz ``` ### Running Tests @@ -165,7 +165,7 @@ mvn clean After building, extract the tarball: ``` -apache-hugegraph-pd-/ +apache-hugegraph-pd-incubating-/ ├── bin/ │ ├── start-hugegraph-pd.sh # Start PD server │ ├── stop-hugegraph-pd.sh # Stop PD server @@ -183,7 +183,7 @@ apache-hugegraph-pd-/ ### Starting PD ```bash -cd apache-hugegraph-pd-/ +cd apache-hugegraph-pd-incubating-/ bin/start-hugegraph-pd.sh # With custom GC options diff --git a/hugegraph-pd/Dockerfile b/hugegraph-pd/Dockerfile index 812e05e7d9..c30cc3dfe2 100644 --- a/hugegraph-pd/Dockerfile +++ b/hugegraph-pd/Dockerfile @@ -30,7 +30,7 @@ RUN mvn package $MAVEN_ARGS -e -B -ntp -Dmaven.test.skip=true -Dmaven.javadoc.sk # Note: ZGC (The Z Garbage Collector) is only supported on ARM-Mac with java > 13 FROM eclipse-temurin:11-jre-jammy -COPY --from=build /pkg/hugegraph-pd/apache-hugegraph-pd-*/ /hugegraph-pd/ +COPY --from=build /pkg/hugegraph-pd/apache-hugegraph-pd-incubating-*/ /hugegraph-pd/ LABEL maintainer="HugeGraph Docker Maintainers " # TODO: use g1gc or zgc as default diff --git a/hugegraph-pd/hg-pd-core/src/main/java/org/apache/hugegraph/pd/raft/RaftEngine.java b/hugegraph-pd/hg-pd-core/src/main/java/org/apache/hugegraph/pd/raft/RaftEngine.java index 2b08de7d4e..e70ac92340 100644 --- a/hugegraph-pd/hg-pd-core/src/main/java/org/apache/hugegraph/pd/raft/RaftEngine.java +++ b/hugegraph-pd/hg-pd-core/src/main/java/org/apache/hugegraph/pd/raft/RaftEngine.java @@ -23,12 +23,9 @@ import java.util.HashSet; import java.util.List; import java.util.Objects; -import java.util.Set; import java.util.concurrent.CompletableFuture; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import 
java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicReference; import java.util.stream.Collectors; @@ -43,6 +40,7 @@ import com.alipay.sofa.jraft.JRaftUtils; import com.alipay.sofa.jraft.Node; import com.alipay.sofa.jraft.RaftGroupService; +import com.alipay.sofa.jraft.ReplicatorGroup; import com.alipay.sofa.jraft.Status; import com.alipay.sofa.jraft.conf.Configuration; import com.alipay.sofa.jraft.core.Replicator; @@ -50,11 +48,13 @@ import com.alipay.sofa.jraft.entity.Task; import com.alipay.sofa.jraft.error.RaftError; import com.alipay.sofa.jraft.option.NodeOptions; +import com.alipay.sofa.jraft.option.RaftOptions; import com.alipay.sofa.jraft.option.RpcOptions; import com.alipay.sofa.jraft.rpc.RaftRpcServerFactory; import com.alipay.sofa.jraft.rpc.RpcServer; import com.alipay.sofa.jraft.rpc.impl.BoltRpcServer; import com.alipay.sofa.jraft.util.Endpoint; +import com.alipay.sofa.jraft.util.ThreadId; import com.alipay.sofa.jraft.util.internal.ThrowUtil; import io.netty.channel.ChannelHandler; @@ -86,12 +86,8 @@ public synchronized boolean init(PDConfig.Raft config) { } this.config = config; - // Wire configured rpc timeout into RaftRpcClient so the Bolt transport - // timeout and the future.get() caller timeout in getLeaderGrpcAddress() are consistent. 
raftRpcClient = new RaftRpcClient(); - RpcOptions rpcOptions = new RpcOptions(); - rpcOptions.setRpcDefaultTimeout(config.getRpcTimeout()); - raftRpcClient.init(rpcOptions); + raftRpcClient.init(new RpcOptions()); String raftPath = config.getDataPath() + "/" + groupId; new File(raftPath).mkdirs(); @@ -123,7 +119,10 @@ public synchronized boolean init(PDConfig.Raft config) { nodeOptions.setRpcConnectTimeoutMs(config.getRpcTimeout()); nodeOptions.setRpcDefaultTimeout(config.getRpcTimeout()); nodeOptions.setRpcInstallSnapshotTimeout(config.getRpcTimeout()); - // TODO: tune RaftOptions for PD (see hugegraph-store PartitionEngine for reference) + // Set the raft configuration + RaftOptions raftOptions = nodeOptions.getRaftOptions(); + + nodeOptions.setEnableMetrics(true); final PeerId serverId = JRaftUtils.getPeerId(config.getAddress()); @@ -229,7 +228,7 @@ public PeerId getLeader() { } /** - * Send a message to the leader to get the grpc address. + * Send a message to the leader to get the grpc address; */ public String getLeaderGrpcAddress() throws ExecutionException, InterruptedException { if (isLeader()) { @@ -237,49 +236,11 @@ public String getLeaderGrpcAddress() throws ExecutionException, InterruptedExcep } if (raftNode.getLeaderId() == null) { - waitingForLeader(config.getRpcTimeout()); - } - - // Cache leader to avoid repeated getLeaderId() calls and guard against - // waitingForLeader() returning without a leader being elected. - PeerId leader = raftNode.getLeaderId(); - if (leader == null) { - throw new ExecutionException(new IllegalStateException("Leader is not ready")); - } - - RaftRpcProcessor.GetMemberResponse response = null; - try { - // TODO: a more complete fix would need a source of truth for the leader's - // actual grpcAddress rather than deriving it from the local node's port config. 
- response = raftRpcClient - .getGrpcAddress(leader.getEndpoint().toString()) - .get(config.getRpcTimeout(), TimeUnit.MILLISECONDS); - if (response != null && response.getGrpcAddress() != null) { - return response.getGrpcAddress(); - } - if (response == null) { - log.warn("Leader RPC response is null for {}, falling back to derived address", - leader); - } else { - log.warn("Leader gRPC address field is null in RPC response for {}, " - + "falling back to derived address", leader); - } - } catch (TimeoutException e) { - log.warn("Timed out resolving leader gRPC address for {}, falling back to derived " - + "address", leader); - } catch (ExecutionException e) { - Throwable cause = e.getCause() != null ? e.getCause() : e; - log.warn("Failed to resolve leader gRPC address for {}, falling back to derived " - + "address", leader, cause); + waitingForLeader(10000); } - // Best-effort fallback: derive from leader raft endpoint IP + local gRPC port. - // WARNING: this may be incorrect in clusters where PD nodes use different grpc.port - // values, a proper fix requires a cluster-wide source of truth for gRPC addresses. 
- String derived = leader.getEndpoint().getIp() + ":" + config.getGrpcPort(); - log.info("Using derived leader gRPC address {} - may be incorrect if nodes use different ports", - derived); - return derived; + return raftRpcClient.getGrpcAddress(raftNode.getLeaderId().getEndpoint().toString()).get() + .getGrpcAddress(); } /** @@ -352,55 +313,23 @@ public List getMembers() throws ExecutionException, InterruptedEx public Status changePeerList(String peerList) { AtomicReference result = new AtomicReference<>(); - Configuration newPeers = new Configuration(); try { String[] peers = peerList.split(",", -1); if ((peers.length & 1) != 1) { throw new PDException(-1, "the number of peer list must be odd."); } + Configuration newPeers = new Configuration(); newPeers.parse(peerList); CountDownLatch latch = new CountDownLatch(1); this.raftNode.changePeers(newPeers, status -> { - result.compareAndSet(null, status); - if (status != null && status.isOk()) { - IpAuthHandler handler = IpAuthHandler.getInstance(); - if (handler != null) { - Set newIps = newPeers.getPeers() - .stream() - .map(PeerId::getIp) - .collect(Collectors.toSet()); - handler.refresh(newIps); - log.info("IpAuthHandler refreshed after peer list change to: {}", - peerList); - } else { - log.warn("IpAuthHandler not initialized, skipping refresh for " - + "peer list: {}", peerList); - } - } + result.set(status); latch.countDown(); }); - boolean completed = latch.await(3L * config.getRpcTimeout(), TimeUnit.MILLISECONDS); - if (!completed && result.get() == null) { - Status timeoutStatus = new Status(RaftError.EINTERNAL, - "changePeerList timed out after %d ms", - 3L * config.getRpcTimeout()); - if (!result.compareAndSet(null, timeoutStatus)) { - timeoutStatus = null; - } - if (timeoutStatus != null) { - log.error("changePeerList to {} timed out after {} ms", - peerList, 3L * config.getRpcTimeout()); - } - } - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - result.set(new 
Status(RaftError.EINTERNAL, "changePeerList interrupted")); - log.error("changePeerList to {} was interrupted", peerList, e); + latch.await(); } catch (Exception e) { log.error("failed to changePeerList to {},{}", peerList, e); result.set(new Status(-1, e.getMessage())); } - return result.get(); } @@ -415,8 +344,7 @@ public PeerId waitingForLeader(long timeOut) { long start = System.currentTimeMillis(); while ((System.currentTimeMillis() - start < timeOut) && (leader == null)) { try { - long remaining = timeOut - (System.currentTimeMillis() - start); - this.wait(Math.min(1000, Math.max(0, remaining))); + this.wait(1000); } catch (InterruptedException e) { log.error("Raft wait for leader exception", e); } @@ -424,6 +352,7 @@ public PeerId waitingForLeader(long timeOut) { } return leader; } + } public Node getRaftNode() { @@ -437,8 +366,7 @@ private boolean peerEquals(PeerId p1, PeerId p2) { if (p1 == null || p2 == null) { return false; } - return Objects.equals(p1.getIp(), p2.getIp()) && - Objects.equals(p1.getPort(), p2.getPort()); + return Objects.equals(p1.getIp(), p2.getIp()) && Objects.equals(p1.getPort(), p2.getPort()); } private Replicator.State getReplicatorState(PeerId peerId) { diff --git a/hugegraph-pd/hg-pd-core/src/main/java/org/apache/hugegraph/pd/raft/auth/IpAuthHandler.java b/hugegraph-pd/hg-pd-core/src/main/java/org/apache/hugegraph/pd/raft/auth/IpAuthHandler.java index bdccb6dd7f..2ac384541d 100644 --- a/hugegraph-pd/hg-pd-core/src/main/java/org/apache/hugegraph/pd/raft/auth/IpAuthHandler.java +++ b/hugegraph-pd/hg-pd-core/src/main/java/org/apache/hugegraph/pd/raft/auth/IpAuthHandler.java @@ -17,11 +17,8 @@ package org.apache.hugegraph.pd.raft.auth; -import java.net.InetAddress; import java.net.InetSocketAddress; -import java.net.UnknownHostException; import java.util.Collections; -import java.util.HashSet; import java.util.Set; import io.netty.channel.ChannelDuplexHandler; @@ -33,11 +30,11 @@ @ChannelHandler.Sharable public class IpAuthHandler 
extends ChannelDuplexHandler { - private volatile Set resolvedIps; + private final Set allowedIps; private static volatile IpAuthHandler instance; private IpAuthHandler(Set allowedIps) { - this.resolvedIps = resolveAll(allowedIps); + this.allowedIps = Collections.unmodifiableSet(allowedIps); } public static IpAuthHandler getInstance(Set allowedIps) { @@ -51,25 +48,6 @@ public static IpAuthHandler getInstance(Set allowedIps) { return instance; } - /** - * Returns the existing singleton instance, or null if not yet initialized. - * Should only be called after getInstance(Set) has been called during startup. - */ - public static IpAuthHandler getInstance() { - return instance; - } - - /** - * Refreshes the resolved IP allowlist from a new set of hostnames or IPs. - * Should be called when the Raft peer list changes via RaftEngine#changePeerList(). - * Note: DNS-only changes (e.g. container restart with new IP, same hostname) - * are not automatically detected and still require a process restart. 
- */ - public void refresh(Set newAllowedIps) { - this.resolvedIps = resolveAll(newAllowedIps); - log.info("IpAuthHandler allowlist refreshed, resolved {} entries", resolvedIps.size()); - } - @Override public void channelActive(ChannelHandlerContext ctx) throws Exception { String clientIp = getClientIp(ctx); @@ -87,25 +65,7 @@ private static String getClientIp(ChannelHandlerContext ctx) { } private boolean isIpAllowed(String ip) { - Set resolved = this.resolvedIps; - // Empty allowlist means no restriction is configured — allow all - return resolved.isEmpty() || resolved.contains(ip); - } - - private static Set resolveAll(Set entries) { - Set result = new HashSet<>(entries); - - for (String entry : entries) { - try { - for (InetAddress addr : InetAddress.getAllByName(entry)) { - result.add(addr.getHostAddress()); - } - } catch (UnknownHostException e) { - log.warn("Could not resolve allowlist entry '{}': {}", entry, e.getMessage()); - } - } - - return Collections.unmodifiableSet(result); + return allowedIps.isEmpty() || allowedIps.contains(ip); } @Override diff --git a/hugegraph-pd/hg-pd-dist/docker/docker-entrypoint.sh b/hugegraph-pd/hg-pd-dist/docker/docker-entrypoint.sh old mode 100755 new mode 100644 index d1ae5c3c3a..fd894d5518 --- a/hugegraph-pd/hg-pd-dist/docker/docker-entrypoint.sh +++ b/hugegraph-pd/hg-pd-dist/docker/docker-entrypoint.sh @@ -15,72 +15,8 @@ # See the License for the specific language governing permissions and # limitations under the License. 
# -set -euo pipefail -log() { echo "[hugegraph-pd-entrypoint] $*"; } +# start hugegraph pd +./bin/start-hugegraph-pd.sh -j "$JAVA_OPTS" -require_env() { - local name="$1" - if [[ -z "${!name:-}" ]]; then - echo "ERROR: missing required env '${name}'" >&2; exit 2 - fi -} - -json_escape() { - local s="$1" - s=${s//\\/\\\\}; s=${s//\"/\\\"}; s=${s//$'\n'/} - printf "%s" "$s" -} - -migrate_env() { - local old_name="$1" new_name="$2" - - if [[ -n "${!old_name:-}" && -z "${!new_name:-}" ]]; then - log "WARN: deprecated env '${old_name}' detected; mapping to '${new_name}'" - export "${new_name}=${!old_name}" - fi -} - -migrate_env "GRPC_HOST" "HG_PD_GRPC_HOST" -migrate_env "RAFT_ADDRESS" "HG_PD_RAFT_ADDRESS" -migrate_env "RAFT_PEERS" "HG_PD_RAFT_PEERS_LIST" -migrate_env "PD_INITIAL_STORE_LIST" "HG_PD_INITIAL_STORE_LIST" - -# ── Required vars ───────────────────────────────────────────────────── -require_env "HG_PD_GRPC_HOST" -require_env "HG_PD_RAFT_ADDRESS" -require_env "HG_PD_RAFT_PEERS_LIST" -require_env "HG_PD_INITIAL_STORE_LIST" - -: "${HG_PD_GRPC_PORT:=8686}" -: "${HG_PD_REST_PORT:=8620}" -: "${HG_PD_DATA_PATH:=/hugegraph-pd/pd_data}" -: "${HG_PD_INITIAL_STORE_COUNT:=1}" - -SPRING_APPLICATION_JSON="$(cat < { if (status.isOk()) { log.info("updatePdRaft, change peers success"); - // Refresh IpAuthHandler so newly added peers are not blocked - IpAuthHandler handler = IpAuthHandler.getInstance(); - if (handler != null) { - Set newIps = new HashSet<>(); - config.getPeers().forEach(p -> newIps.add(p.getIp())); - config.getLearners().forEach(p -> newIps.add(p.getIp())); - handler.refresh(newIps); - log.info("IpAuthHandler refreshed after updatePdRaft peer change"); - } else { - log.warn("IpAuthHandler not initialized, skipping refresh"); - } } else { log.error("changePeers status: {}, msg:{}, code: {}, raft error:{}", status, status.getErrorMsg(), status.getCode(), diff --git 
a/hugegraph-pd/hg-pd-service/src/main/java/org/apache/hugegraph/pd/service/interceptor/Authentication.java b/hugegraph-pd/hg-pd-service/src/main/java/org/apache/hugegraph/pd/service/interceptor/Authentication.java index 48bcf38683..83901bca1a 100644 --- a/hugegraph-pd/hg-pd-service/src/main/java/org/apache/hugegraph/pd/service/interceptor/Authentication.java +++ b/hugegraph-pd/hg-pd-service/src/main/java/org/apache/hugegraph/pd/service/interceptor/Authentication.java @@ -77,8 +77,6 @@ protected T authenticate(String authority, String token, Function } String name = info.substring(0, delim); - // TODO: password validation is skipped — only service name is checked against - // innerModules. Full credential validation should be added as part of the auth refactor. //String pwd = info.substring(delim + 1); if (innerModules.contains(name)) { return call.get(); diff --git a/hugegraph-pd/hg-pd-service/src/main/java/org/apache/hugegraph/pd/util/grpc/GRpcServerConfig.java b/hugegraph-pd/hg-pd-service/src/main/java/org/apache/hugegraph/pd/util/grpc/GRpcServerConfig.java index 2b1103739b..fce6d2379d 100644 --- a/hugegraph-pd/hg-pd-service/src/main/java/org/apache/hugegraph/pd/util/grpc/GRpcServerConfig.java +++ b/hugegraph-pd/hg-pd-service/src/main/java/org/apache/hugegraph/pd/util/grpc/GRpcServerConfig.java @@ -40,8 +40,6 @@ public void configure(ServerBuilder serverBuilder) { HgExecutorUtil.createExecutor(EXECUTOR_NAME, poolGrpc.getCore(), poolGrpc.getMax(), poolGrpc.getQueue())); serverBuilder.maxInboundMessageSize(MAX_INBOUND_MESSAGE_SIZE); - // TODO: GrpcAuthentication is instantiated as a Spring bean but never registered - // here — add serverBuilder.intercept(grpcAuthentication) once auth is refactored. 
} } diff --git a/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/core/PDCoreSuiteTest.java b/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/core/PDCoreSuiteTest.java index 87d1500bcb..5098645128 100644 --- a/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/core/PDCoreSuiteTest.java +++ b/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/core/PDCoreSuiteTest.java @@ -19,9 +19,6 @@ import org.apache.hugegraph.pd.core.meta.MetadataKeyHelperTest; import org.apache.hugegraph.pd.core.store.HgKVStoreImplTest; -import org.apache.hugegraph.pd.raft.IpAuthHandlerTest; -import org.apache.hugegraph.pd.raft.RaftEngineIpAuthIntegrationTest; -import org.apache.hugegraph.pd.raft.RaftEngineLeaderAddressTest; import org.junit.runner.RunWith; import org.junit.runners.Suite; @@ -39,9 +36,6 @@ StoreMonitorDataServiceTest.class, StoreServiceTest.class, TaskScheduleServiceTest.class, - IpAuthHandlerTest.class, - RaftEngineIpAuthIntegrationTest.class, - RaftEngineLeaderAddressTest.class, // StoreNodeServiceTest.class, }) @Slf4j diff --git a/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/raft/IpAuthHandlerTest.java b/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/raft/IpAuthHandlerTest.java deleted file mode 100644 index 31647b6d39..0000000000 --- a/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/raft/IpAuthHandlerTest.java +++ /dev/null @@ -1,133 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hugegraph.pd.raft; - -import java.net.InetAddress; -import java.util.Collections; -import java.util.HashSet; -import java.util.Set; - -import org.apache.hugegraph.pd.raft.auth.IpAuthHandler; -import org.apache.hugegraph.testutil.Whitebox; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; - -public class IpAuthHandlerTest { - - @Before - public void setUp() { - // Must reset BEFORE each test — earlier suite classes (e.g. ConfigServiceTest) - // initialize RaftEngine which creates the IpAuthHandler singleton with their - // own peer IPs. Without this reset, our getInstance() calls return the stale - // singleton and ignore the allowlist passed by the test. - Whitebox.setInternalState(IpAuthHandler.class, "instance", null); - } - - @After - public void tearDown() { - // Must reset AFTER each test — prevents our test singleton from leaking - // into later suite classes that also depend on IpAuthHandler state. 
- Whitebox.setInternalState(IpAuthHandler.class, "instance", null); - } - - private boolean isIpAllowed(IpAuthHandler handler, String ip) { - return Whitebox.invoke(IpAuthHandler.class, - new Class[]{String.class}, - "isIpAllowed", handler, ip); - } - - @Test - public void testHostnameResolvesToIp() throws Exception { - // "localhost" should resolve to one or more IPs via InetAddress.getAllByName() - // This verifies the core fix: hostname allowlists match numeric remote addresses - // Using dynamic resolution avoids hardcoding "127.0.0.1" which may not be - // returned on IPv6-only or custom resolver environments - IpAuthHandler handler = IpAuthHandler.getInstance( - Collections.singleton("localhost")); - InetAddress[] addresses = InetAddress.getAllByName("localhost"); - // All resolved addresses should be allowed — resolveAll() adds every address - // returned by getAllByName() so none should be blocked - Assert.assertTrue("Expected at least one resolved address", - addresses.length > 0); - for (InetAddress address : addresses) { - Assert.assertTrue( - "Expected " + address.getHostAddress() + " to be allowed", - isIpAllowed(handler, address.getHostAddress())); - } - } - - @Test - public void testUnresolvableHostnameDoesNotCrash() { - // Should log a warning and skip — no exception thrown during construction - // Uses .invalid TLD which is RFC-2606 reserved and guaranteed to never resolve - IpAuthHandler handler = IpAuthHandler.getInstance( - Collections.singleton("nonexistent.invalid")); - // Handler was still created successfully despite bad hostname - Assert.assertNotNull(handler); - // Unresolvable entry is skipped so no IPs should be allowed - Assert.assertFalse(isIpAllowed(handler, "127.0.0.1")); - Assert.assertFalse(isIpAllowed(handler, "192.168.0.1")); - } - - @Test - public void testRefreshUpdatesResolvedIps() { - // Start with 127.0.0.1 - IpAuthHandler handler = IpAuthHandler.getInstance( - Collections.singleton("127.0.0.1")); - 
Assert.assertTrue(isIpAllowed(handler, "127.0.0.1")); - - // Refresh with a different IP — verifies refresh() swaps the set correctly - Set<String> newIps = new HashSet<>(); - newIps.add("192.168.0.1"); - handler.refresh(newIps); - - // Old IP should no longer be allowed - Assert.assertFalse(isIpAllowed(handler, "127.0.0.1")); - // New IP should now be allowed - Assert.assertTrue(isIpAllowed(handler, "192.168.0.1")); - } - - @Test - public void testEmptyAllowlistAllowsAll() { - // Empty allowlist = no restriction configured = allow all connections - // This is intentional fallback behavior and must be explicitly tested - // because it is a security-relevant boundary - IpAuthHandler handler = IpAuthHandler.getInstance( - Collections.emptySet()); - Assert.assertTrue(isIpAllowed(handler, "1.2.3.4")); - Assert.assertTrue(isIpAllowed(handler, "192.168.99.99")); - } - - @Test - public void testGetInstanceReturnsSingletonIgnoresNewAllowlist() { - // First call creates the singleton with 127.0.0.1 - IpAuthHandler first = IpAuthHandler.getInstance( - Collections.singleton("127.0.0.1")); - // Second call with a different set must return the same instance - // and must NOT reinitialize or override the existing allowlist - IpAuthHandler second = IpAuthHandler.getInstance( - Collections.singleton("192.168.0.1")); - Assert.assertSame(first, second); - // Original allowlist still in effect - Assert.assertTrue(isIpAllowed(second, "127.0.0.1")); - // New set was ignored — 192.168.0.1 should not be allowed - Assert.assertFalse(isIpAllowed(second, "192.168.0.1")); - } -} diff --git a/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/raft/RaftEngineIpAuthIntegrationTest.java b/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/raft/RaftEngineIpAuthIntegrationTest.java deleted file mode 100644 index 1f9857df0f..0000000000 --- a/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/raft/RaftEngineIpAuthIntegrationTest.java +++ /dev/null @@ -1,124 +0,0 @@ -/* - * 
Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hugegraph.pd.raft; - -import java.util.Collections; - -import org.apache.hugegraph.pd.raft.auth.IpAuthHandler; -import org.apache.hugegraph.testutil.Whitebox; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; - -import com.alipay.sofa.jraft.Closure; -import com.alipay.sofa.jraft.Node; -import com.alipay.sofa.jraft.Status; -import com.alipay.sofa.jraft.conf.Configuration; -import com.alipay.sofa.jraft.error.RaftError; - -import static org.mockito.ArgumentMatchers.any; -import static org.mockito.Mockito.doAnswer; -import static org.mockito.Mockito.mock; - -public class RaftEngineIpAuthIntegrationTest { - - private Node originalRaftNode; - - @Before - public void setUp() { - // Save original raftNode so we can restore it after the test - originalRaftNode = RaftEngine.getInstance().getRaftNode(); - // Reset IpAuthHandler singleton for a clean state - Whitebox.setInternalState(IpAuthHandler.class, "instance", null); - } - - @After - public void tearDown() { - // Restore original raftNode - Whitebox.setInternalState(RaftEngine.getInstance(), "raftNode", originalRaftNode); - // Reset IpAuthHandler singleton - 
Whitebox.setInternalState(IpAuthHandler.class, "instance", null); - } - - @Test - public void testChangePeerListRefreshesIpAuthHandler() throws Exception { - // Initialize IpAuthHandler with an old IP - IpAuthHandler handler = IpAuthHandler.getInstance( - Collections.singleton("10.0.0.1")); - Assert.assertTrue(invokeIsIpAllowed(handler, "10.0.0.1")); - Assert.assertFalse(invokeIsIpAllowed(handler, "127.0.0.1")); - - // Mock Node to fire the changePeers callback synchronously with Status.OK() - // This simulates a successful peer change without a real Raft cluster - - // Important: fire the closure synchronously or changePeerList() will - // block on latch.await(...) until the configured timeout elapses - Node mockNode = mock(Node.class); - doAnswer(invocation -> { - Closure closure = invocation.getArgument(1); - closure.run(Status.OK()); - return null; - }).when(mockNode).changePeers(any(Configuration.class), any(Closure.class)); - - // Inject mock node into RaftEngine - Whitebox.setInternalState(RaftEngine.getInstance(), "raftNode", mockNode); - - // Call changePeerList with new peer — must be odd count - RaftEngine.getInstance().changePeerList("127.0.0.1:8610"); - - // Verify IpAuthHandler was refreshed with the new peer IP - Assert.assertTrue(invokeIsIpAllowed(handler, "127.0.0.1")); - // Old IP should no longer be allowed - Assert.assertFalse(invokeIsIpAllowed(handler, "10.0.0.1")); - } - - @Test - public void testChangePeerListDoesNotRefreshOnFailure() throws Exception { - // Initialize IpAuthHandler with original IP - IpAuthHandler handler = IpAuthHandler.getInstance( - Collections.singleton("10.0.0.1")); - Assert.assertTrue(invokeIsIpAllowed(handler, "10.0.0.1")); - - // Mock Node to fire callback with a failed status - // Simulates a failed peer change — handler should NOT be refreshed - - // Important: fire the closure synchronously or changePeerList() will - // block on latch.await(...) 
until the configured timeout elapses - Node mockNode = mock(Node.class); - doAnswer(invocation -> { - Closure closure = invocation.getArgument(1); - closure.run(new Status(RaftError.EINTERNAL, "simulated failure")); - return null; - }).when(mockNode).changePeers(any(Configuration.class), any(Closure.class)); - - Whitebox.setInternalState(RaftEngine.getInstance(), "raftNode", mockNode); - - RaftEngine.getInstance().changePeerList("127.0.0.1:8610"); - - // Handler should NOT be refreshed — old IP still allowed - Assert.assertTrue(invokeIsIpAllowed(handler, "10.0.0.1")); - Assert.assertFalse(invokeIsIpAllowed(handler, "127.0.0.1")); - } - - private boolean invokeIsIpAllowed(IpAuthHandler handler, String ip) { - return Whitebox.invoke(IpAuthHandler.class, - new Class[]{String.class}, - "isIpAllowed", handler, ip); - } -} diff --git a/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/raft/RaftEngineLeaderAddressTest.java b/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/raft/RaftEngineLeaderAddressTest.java deleted file mode 100644 index 420b106a27..0000000000 --- a/hugegraph-pd/hg-pd-test/src/main/java/org/apache/hugegraph/pd/raft/RaftEngineLeaderAddressTest.java +++ /dev/null @@ -1,183 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hugegraph.pd.raft; - -import java.util.concurrent.CompletableFuture; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; - -import org.apache.hugegraph.pd.config.PDConfig; -import org.apache.hugegraph.testutil.Whitebox; -import org.junit.After; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; - -import com.alipay.sofa.jraft.Node; -import com.alipay.sofa.jraft.entity.PeerId; -import com.alipay.sofa.jraft.util.Endpoint; - -import static org.mockito.ArgumentMatchers.anyLong; -import static org.mockito.ArgumentMatchers.anyString; -import static org.mockito.ArgumentMatchers.eq; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.when; - -public class RaftEngineLeaderAddressTest { - - private static final String LEADER_IP = "10.0.0.1"; - private static final int GRPC_PORT = 8686; - private static final String LEADER_GRPC_ADDRESS = "10.0.0.1:8686"; - - private Node originalRaftNode; - private RaftRpcClient originalRaftRpcClient; - private PDConfig.Raft originalConfig; - - private Node mockNode; - private RaftRpcClient mockRpcClient; - private PDConfig.Raft mockConfig; - private PeerId mockLeader; - - @Before - public void setUp() { - RaftEngine engine = RaftEngine.getInstance(); - - // Save originals - originalRaftNode = engine.getRaftNode(); - originalRaftRpcClient = Whitebox.getInternalState(engine, "raftRpcClient"); - originalConfig = Whitebox.getInternalState(engine, "config"); - - // Build mock leader PeerId with real Endpoint - mockLeader = mock(PeerId.class); - Endpoint endpoint = new Endpoint(LEADER_IP, 8610); - when(mockLeader.getEndpoint()).thenReturn(endpoint); - - // Build mock Node that reports itself as follower with a known leader - mockNode = mock(Node.class); - 
when(mockNode.isLeader(true)).thenReturn(false); - when(mockNode.getLeaderId()).thenReturn(mockLeader); - - // Build mock config - // Use a short default timeout (100ms); specific tests may override getRpcTimeout() - mockConfig = mock(PDConfig.Raft.class); - when(mockConfig.getGrpcAddress()).thenReturn("127.0.0.1:" + GRPC_PORT); - when(mockConfig.getGrpcPort()).thenReturn(GRPC_PORT); - when(mockConfig.getRpcTimeout()).thenReturn(100); - - // Build mock RpcClient - mockRpcClient = mock(RaftRpcClient.class); - - // Inject mocks - Whitebox.setInternalState(engine, "raftNode", mockNode); - Whitebox.setInternalState(engine, "raftRpcClient", mockRpcClient); - Whitebox.setInternalState(engine, "config", mockConfig); - } - - @After - public void tearDown() { - RaftEngine engine = RaftEngine.getInstance(); - Whitebox.setInternalState(engine, "raftNode", originalRaftNode); - Whitebox.setInternalState(engine, "raftRpcClient", originalRaftRpcClient); - Whitebox.setInternalState(engine, "config", originalConfig); - } - - @Test - public void testSuccessReturnsGrpcAddress() throws Exception { - // RPC succeeds and returns a valid gRPC address - RaftRpcProcessor.GetMemberResponse response = - mock(RaftRpcProcessor.GetMemberResponse.class); - when(response.getGrpcAddress()).thenReturn(LEADER_GRPC_ADDRESS); - - CompletableFuture<RaftRpcProcessor.GetMemberResponse> future = - CompletableFuture.completedFuture(response); - when(mockRpcClient.getGrpcAddress(anyString())).thenReturn(future); - - String result = RaftEngine.getInstance().getLeaderGrpcAddress(); - Assert.assertEquals(LEADER_GRPC_ADDRESS, result); - } - - @Test - public void testTimeoutFallsBackToDerivedAddress() throws Exception { - // RPC times out — should fall back to leaderIp:grpcPort - CompletableFuture<RaftRpcProcessor.GetMemberResponse> future = - mock(CompletableFuture.class); - when(future.get(anyLong(), eq(TimeUnit.MILLISECONDS))) - .thenThrow(new TimeoutException("simulated timeout")); - when(mockRpcClient.getGrpcAddress(anyString())).thenReturn(future); - - String result = 
RaftEngine.getInstance().getLeaderGrpcAddress(); - Assert.assertEquals(LEADER_IP + ":" + GRPC_PORT, result); - } - - @Test - public void testRpcExceptionFallsBackToDerivedAddress() throws Exception { - // RPC throws ExecutionException — should fall back to leaderIp:grpcPort - CompletableFuture<RaftRpcProcessor.GetMemberResponse> future = - mock(CompletableFuture.class); - when(future.get(anyLong(), eq(TimeUnit.MILLISECONDS))) - .thenThrow(new ExecutionException("simulated rpc failure", - new RuntimeException("bolt error"))); - when(mockRpcClient.getGrpcAddress(anyString())).thenReturn(future); - - String result = RaftEngine.getInstance().getLeaderGrpcAddress(); - Assert.assertEquals(LEADER_IP + ":" + GRPC_PORT, result); - } - - @Test - public void testNullResponseFallsBackToDerivedAddress() throws Exception { - // RPC returns null response — should fall back to leaderIp:grpcPort - CompletableFuture<RaftRpcProcessor.GetMemberResponse> future = - CompletableFuture.completedFuture(null); - when(mockRpcClient.getGrpcAddress(anyString())).thenReturn(future); - - String result = RaftEngine.getInstance().getLeaderGrpcAddress(); - Assert.assertEquals(LEADER_IP + ":" + GRPC_PORT, result); - } - - @Test - public void testNullGrpcAddressInResponseFallsBackToDerivedAddress() throws Exception { - // RPC returns a response but grpcAddress field is null — should fall back - RaftRpcProcessor.GetMemberResponse response = - mock(RaftRpcProcessor.GetMemberResponse.class); - when(response.getGrpcAddress()).thenReturn(null); - - CompletableFuture<RaftRpcProcessor.GetMemberResponse> future = - CompletableFuture.completedFuture(response); - when(mockRpcClient.getGrpcAddress(anyString())).thenReturn(future); - - String result = RaftEngine.getInstance().getLeaderGrpcAddress(); - Assert.assertEquals(LEADER_IP + ":" + GRPC_PORT, result); - } - - @Test - public void testNullLeaderAfterWaitThrowsExecutionException() throws Exception { - // Use 0ms timeout so waitingForLeader(0) skips the wait loop and returns immediately - when(mockConfig.getRpcTimeout()).thenReturn(0); - // Leader is still null 
after waitingForLeader() — should throw ExecutionException - when(mockNode.getLeaderId()).thenReturn(null); - - try { - RaftEngine.getInstance().getLeaderGrpcAddress(); - Assert.fail("Expected ExecutionException"); - } catch (ExecutionException e) { - Assert.assertTrue(e.getCause() instanceof IllegalStateException); - Assert.assertEquals("Leader is not ready", e.getCause().getMessage()); - } - } -} diff --git a/hugegraph-pd/pom.xml b/hugegraph-pd/pom.xml index ceb8af33b2..4af7896bb2 100644 --- a/hugegraph-pd/pom.xml +++ b/hugegraph-pd/pom.xml @@ -44,7 +44,7 @@ 2.17.0 - apache-${release.name}-pd-${project.version} + apache-${release.name}-pd-incubating-${project.version} 3.12.0 4.13.2 diff --git a/hugegraph-server/Dockerfile b/hugegraph-server/Dockerfile index f7613f8485..c9df67dc3f 100644 --- a/hugegraph-server/Dockerfile +++ b/hugegraph-server/Dockerfile @@ -30,7 +30,7 @@ RUN mvn package $MAVEN_ARGS -e -B -ntp -Dmaven.test.skip=true -Dmaven.javadoc.sk # Note: ZGC (The Z Garbage Collector) is only supported on ARM-Mac with java > 13 FROM eclipse-temurin:11-jre-jammy -COPY --from=build /pkg/hugegraph-server/apache-hugegraph-server-*/ /hugegraph-server/ +COPY --from=build /pkg/hugegraph-server/apache-hugegraph-server-incubating-*/ /hugegraph-server/ LABEL maintainer="HugeGraph Docker Maintainers " # TODO: use g1gc or zgc as default diff --git a/hugegraph-server/Dockerfile-hstore b/hugegraph-server/Dockerfile-hstore index 2c6e4b110f..8f7017b6d2 100644 --- a/hugegraph-server/Dockerfile-hstore +++ b/hugegraph-server/Dockerfile-hstore @@ -30,7 +30,7 @@ RUN mvn package $MAVEN_ARGS -e -B -ntp -DskipTests -Dmaven.javadoc.skip=true && # Note: ZGC (The Z Garbage Collector) is only supported on ARM-Mac with java > 13 FROM eclipse-temurin:11-jre-jammy -COPY --from=build /pkg/hugegraph-server/apache-hugegraph-server-*/ /hugegraph-server/ +COPY --from=build /pkg/hugegraph-server/apache-hugegraph-server-incubating-*/ /hugegraph-server/ # remove hugegraph.properties and rename 
hstore.properties.template for default hstore backend RUN cd /hugegraph-server/conf/graphs \ && rm hugegraph.properties && mv hstore.properties.template hugegraph.properties @@ -62,7 +62,7 @@ RUN set -x \ # 2. Init docker script COPY hugegraph-server/hugegraph-dist/docker/scripts/remote-connect.groovy ./scripts -#COPY hugegraph-server/hugegraph-dist/docker/scripts/detect-storage.groovy ./scripts +COPY hugegraph-server/hugegraph-dist/docker/scripts/detect-storage.groovy ./scripts COPY hugegraph-server/hugegraph-dist/docker/docker-entrypoint.sh . RUN chmod 755 ./docker-entrypoint.sh diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/API.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/API.java index 3220cf6b02..c476864711 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/API.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/API.java @@ -86,8 +86,6 @@ public class API { MetricsUtil.registerMeter(API.class, "expected-error"); private static final Meter unknownErrorMeter = MetricsUtil.registerMeter(API.class, "unknown-error"); - private static final String STANDALONE_ERROR = - "GraphSpace management is not supported in standalone mode"; public static HugeGraph graph(GraphManager manager, String graphSpace, String graph) { @@ -243,20 +241,6 @@ public static boolean checkAndParseAction(String action) { } } - /** - * Ensures the graph manager is available and PD mode is enabled. 
- * - * @param manager the graph manager of current request - * @throws IllegalArgumentException if the graph manager is null - * @throws HugeException if PD mode is disabled - */ - protected static void ensurePdModeEnabled(GraphManager manager) { - E.checkArgumentNotNull(manager, "Graph manager can't be null"); - if (!manager.isPDEnabled()) { - throw new HugeException(STANDALONE_ERROR); - } - } - public static boolean hasAdminPerm(GraphManager manager, String user) { return manager.authManager().isAdminManager(user); } diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/AccessAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/AccessAPI.java index 35b05eedb1..8fc8f04442 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/AccessAPI.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/AccessAPI.java @@ -35,8 +35,6 @@ import com.fasterxml.jackson.annotation.JsonIgnoreProperties; import com.fasterxml.jackson.annotation.JsonProperty; -import io.swagger.v3.oas.annotations.Parameter; -import io.swagger.v3.oas.annotations.media.Schema; import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.inject.Singleton; import jakarta.ws.rs.Consumes; @@ -64,7 +62,6 @@ public class AccessAPI extends API { @Consumes(APPLICATION_JSON) @Produces(APPLICATION_JSON_WITH_CHARSET) public String create(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, JsonAccess jsonAccess) { LOG.debug("GraphSpace [{}] create access: {}", graphSpace, jsonAccess); @@ -81,9 +78,7 @@ public String create(@Context GraphManager manager, @Consumes(APPLICATION_JSON) @Produces(APPLICATION_JSON_WITH_CHARSET) public String update(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The access id") 
@PathParam("id") String id, JsonAccess jsonAccess) { LOG.debug("GraphSpace [{}] update access: {}", graphSpace, jsonAccess); @@ -104,13 +99,9 @@ public String update(@Context GraphManager manager, @Timed @Produces(APPLICATION_JSON_WITH_CHARSET) public String list(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The group id to filter by") @QueryParam("group") String group, - @Parameter(description = "The target id to filter by") @QueryParam("target") String target, - @Parameter(description = "The limit of results to return") @QueryParam("limit") @DefaultValue("100") long limit) { LOG.debug("GraphSpace [{}] list accesses by group {} or target {}", graphSpace, group, target); @@ -135,9 +126,7 @@ public String list(@Context GraphManager manager, @Path("{id}") @Produces(APPLICATION_JSON_WITH_CHARSET) public String get(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The access id") @PathParam("id") String id) { LOG.debug("GraphSpace [{}] get access: {}", graphSpace, id); @@ -150,9 +139,7 @@ public String get(@Context GraphManager manager, @Path("{id}") @Consumes(APPLICATION_JSON) public void delete(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The access id") @PathParam("id") String id) { LOG.debug("GraphSpace [{}] delete access: {}", graphSpace, id); @@ -168,16 +155,12 @@ public void delete(@Context GraphManager manager, private static class JsonAccess implements Checkable { @JsonProperty("group") - @Schema(description = "The group id", required = true) private String group; @JsonProperty("target") - @Schema(description = "The target id", required = true) private String target; @JsonProperty("access_permission") - @Schema(description = "The access 
permission", required = true) private HugePermission permission; @JsonProperty("access_description") - @Schema(description = "The access description") private String description; public HugeAccess build(HugeAccess access) { diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/BelongAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/BelongAPI.java index 09af7f51c9..1064802e29 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/BelongAPI.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/BelongAPI.java @@ -34,8 +34,6 @@ import com.fasterxml.jackson.annotation.JsonIgnoreProperties; import com.fasterxml.jackson.annotation.JsonProperty; -import io.swagger.v3.oas.annotations.Parameter; -import io.swagger.v3.oas.annotations.media.Schema; import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.inject.Singleton; import jakarta.ws.rs.Consumes; @@ -63,7 +61,6 @@ public class BelongAPI extends API { @Consumes(APPLICATION_JSON) @Produces(APPLICATION_JSON_WITH_CHARSET) public String create(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, JsonBelong jsonBelong) { LOG.debug("GraphSpace [{}] create belong: {}", graphSpace, jsonBelong); @@ -80,9 +77,7 @@ public String create(@Context GraphManager manager, @Consumes(APPLICATION_JSON) @Produces(APPLICATION_JSON_WITH_CHARSET) public String update(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The belong id") @PathParam("id") String id, JsonBelong jsonBelong) { LOG.debug("GraphSpace [{}] update belong: {}", graphSpace, jsonBelong); @@ -103,13 +98,9 @@ public String update(@Context GraphManager manager, @Timed @Produces(APPLICATION_JSON_WITH_CHARSET) public String list(@Context GraphManager manager, - 
@Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The user id to filter by") @QueryParam("user") String user, - @Parameter(description = "The group id to filter by") @QueryParam("group") String group, - @Parameter(description = "The limit of results to return") @QueryParam("limit") @DefaultValue("100") long limit) { LOG.debug("GraphSpace [{}] list belongs by user {} or group {}", graphSpace, user, group); @@ -134,9 +125,7 @@ public String list(@Context GraphManager manager, @Path("{id}") @Produces(APPLICATION_JSON_WITH_CHARSET) public String get(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The belong id") @PathParam("id") String id) { LOG.debug("GraphSpace [{}] get belong: {}", graphSpace, id); @@ -149,9 +138,7 @@ public String get(@Context GraphManager manager, @Path("{id}") @Consumes(APPLICATION_JSON) public void delete(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The belong id") @PathParam("id") String id) { LOG.debug("GraphSpace [{}] delete belong: {}", graphSpace, id); @@ -167,13 +154,10 @@ public void delete(@Context GraphManager manager, private static class JsonBelong implements Checkable { @JsonProperty("user") - @Schema(description = "The user id", required = true) private String user; @JsonProperty("group") - @Schema(description = "The group id", required = true) private String group; @JsonProperty("belong_description") - @Schema(description = "The belong description") private String description; public HugeBelong build(HugeBelong belong) { diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/GroupAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/GroupAPI.java index ae13beb4a6..2786ef0b6d 100644 --- 
a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/GroupAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/GroupAPI.java
@@ -34,8 +34,6 @@
 import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
 import com.fasterxml.jackson.annotation.JsonProperty;
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.annotation.security.RolesAllowed;
 import jakarta.inject.Singleton;
@@ -81,7 +79,6 @@ public String create(@Context GraphManager manager,
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"admin"})
     public String update(@Context GraphManager manager,
-                         @Parameter(description = "The group id")
                          @PathParam("id") String id,
                          JsonGroup jsonGroup) {
         LOG.debug("update group: {}", jsonGroup);
@@ -103,7 +100,6 @@ public String update(@Context GraphManager manager,
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"admin"})
     public String list(@Context GraphManager manager,
-                       @Parameter(description = "The limit of results to return")
                        @QueryParam("limit") @DefaultValue("100") long limit) {
         LOG.debug("list groups");
@@ -117,7 +113,6 @@ public String list(@Context GraphManager manager,
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"admin"})
     public String get(@Context GraphManager manager,
-                      @Parameter(description = "The group id")
                       @PathParam("id") String id) {
         LOG.debug("get group: {}", id);
@@ -131,7 +126,6 @@ public String get(@Context GraphManager manager,
     @Consumes(APPLICATION_JSON)
     @RolesAllowed({"admin"})
     public void delete(@Context GraphManager manager,
-                       @Parameter(description = "The group id")
                        @PathParam("id") String id) {
         LOG.debug("delete group: {}", id);
@@ -147,10 +141,8 @@ public void delete(@Context GraphManager manager,
     private static class JsonGroup implements Checkable {
         @JsonProperty("group_name")
-        @Schema(description = "The group name", required = true)
         private String name;
         @JsonProperty("group_description")
-        @Schema(description = "The group description")
         private String description;
         public HugeGroup build(HugeGroup group) {
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/LoginAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/LoginAPI.java
index faf62c4064..7086b77af2 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/LoginAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/LoginAPI.java
@@ -35,7 +35,6 @@
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.google.common.collect.ImmutableMap;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.BadRequestException;
@@ -126,13 +125,10 @@ public String verifyToken(@Context GraphManager manager,
     private static class JsonLogin implements Checkable {
         @JsonProperty("user_name")
-        @Schema(description = "The user name")
         private String name;
         @JsonProperty("user_password")
-        @Schema(description = "The user password")
         private String password;
         @JsonProperty("token_expire")
-        @Schema(description = "Token expiration time in seconds")
         private long expire;
         @Override
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/ManagerAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/ManagerAPI.java
index 071e4b8a66..80b91d2731 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/ManagerAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/ManagerAPI.java
@@ -37,8 +37,6 @@
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.google.common.collect.ImmutableMap;
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.Consumes;
@@ -64,11 +62,9 @@ public class ManagerAPI extends API {
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String createManager(@Context GraphManager manager,
-                                @Parameter(description = "The graph space name")
                                 @PathParam("graphspace") String graphSpace,
                                 JsonManager jsonManager) {
         LOG.debug("Create manager: {}", jsonManager);
-        ensurePdModeEnabled(manager);
         String user = jsonManager.user;
         HugePermission type = jsonManager.type;
         // graphSpace now comes from @PathParam instead of JsonManager
@@ -117,14 +113,10 @@ public String createManager(@Context GraphManager manager,
     @Timed
     @Consumes(APPLICATION_JSON)
     public void delete(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The user name")
                        @QueryParam("user") String user,
-                       @Parameter(description = "The manager type: SPACE, SPACE_MEMBER, or ADMIN")
                        @QueryParam("type") HugePermission type) {
         LOG.debug("Delete graph manager: {} {} {}", user, type, graphSpace);
-        ensurePdModeEnabled(manager);
         E.checkArgument(!"admin".equals(user) || type != HugePermission.ADMIN,
                         "User 'admin' can't be removed from ADMIN");
@@ -165,12 +157,10 @@ public void delete(@Context GraphManager manager,
     @Timed
     @Consumes(APPLICATION_JSON)
     public String list(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The manager type: SPACE, SPACE_MEMBER or ADMIN")
                        @QueryParam("type") HugePermission type) {
         LOG.debug("list graph manager: {} {}", type, graphSpace);
-        ensurePdModeEnabled(manager);
+        AuthManager authManager = manager.authManager();
         validType(type);
         List adminManagers;
@@ -197,13 +187,10 @@ public String list(@Context GraphManager manager,
     @Path("check")
     @Consumes(APPLICATION_JSON)
     public String checkRole(@Context GraphManager manager,
-                            @Parameter(description = "The graph space name")
                             @PathParam("graphspace") String graphSpace,
-                            @Parameter(description = "The manager type: " +
-                                                     "SPACE, SPACE_MEMBER, or ADMIN")
                             @QueryParam("type") HugePermission type) {
         LOG.debug("check if current user is graph manager: {} {}", type, graphSpace);
-        ensurePdModeEnabled(manager);
+        validType(type);
         AuthManager authManager = manager.authManager();
         String user = HugeGraphAuthProxy.username();
@@ -232,12 +219,9 @@ public String checkRole(@Context GraphManager manager,
     @Path("role")
     @Consumes(APPLICATION_JSON)
     public String getRolesInGs(@Context GraphManager manager,
-                               @Parameter(description = "The graph space name")
                                @PathParam("graphspace") String graphSpace,
-                               @Parameter(description = "The user name")
-                               @QueryParam("user")
-                               String user) {
+                               @QueryParam("user") String user) {
         LOG.debug("get user [{}]'s role in graph space [{}]", user, graphSpace);
-        ensurePdModeEnabled(manager);
         AuthManager authManager = manager.authManager();
         List result = new ArrayList<>();
         validGraphSpace(manager, graphSpace);
@@ -280,10 +264,8 @@ private void validGraphSpace(GraphManager manager, String graphSpace) {
     private static class JsonManager implements Checkable {
         @JsonProperty("user")
-        @Schema(description = "The user or group name", required = true)
         private String user;
         @JsonProperty("type")
-        @Schema(description = "The manager type: SPACE, SPACE_MEMBER, or ADMIN", required = true)
         private HugePermission type;
         @Override
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/ProjectAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/ProjectAPI.java
index 4380093ba0..229903c137 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/ProjectAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/ProjectAPI.java
@@ -39,8 +39,6 @@
 import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
 import com.fasterxml.jackson.annotation.JsonProperty;
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.Consumes;
@@ -70,7 +68,6 @@ public class ProjectAPI extends API {
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String create(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
                          JsonProject jsonProject) {
         LOG.debug("GraphSpace [{}] create project: {}", graphSpace, jsonProject);
@@ -92,15 +89,8 @@ public String create(@Context GraphManager manager,
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String update(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The project id")
                          @PathParam("id") String id,
-                         @Parameter(
-                                 description = "The action to perform: " +
-                                               "add_graph, remove_graph, " +
-                                               "or empty for description " +
-                                               "update")
                          @QueryParam("action") String action,
                          JsonProject jsonProject) {
         LOG.debug("GraphSpace [{}] update {} project: {}", graphSpace, action,
@@ -136,9 +126,7 @@ public String update(@Context GraphManager manager,
     @Timed
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String list(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The limit of results to return")
                        @QueryParam("limit") @DefaultValue("100") long limit) {
         LOG.debug("GraphSpace [{}] list project", graphSpace);
@@ -152,9 +140,7 @@
     @Path("{id}")
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String get(@Context GraphManager manager,
-                      @Parameter(description = "The graph space name")
                       @PathParam("graphspace") String graphSpace,
-                      @Parameter(description = "The project id")
                       @PathParam("id") String id) {
         LOG.debug("GraphSpace [{}] get project: {}", graphSpace, id);
@@ -172,9 +158,7 @@ public String get(@Context GraphManager manager,
     @Path("{id}")
     @Consumes(APPLICATION_JSON)
     public void delete(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The project id")
                        @PathParam("id") String id) {
         LOG.debug("GraphSpace [{}] delete project: {}", graphSpace, id);
@@ -200,13 +184,10 @@ public static boolean isRemoveGraph(String action) {
     private static class JsonProject implements Checkable {
         @JsonProperty("project_name")
-        @Schema(description = "The project name", required = true)
         private String name;
         @JsonProperty("project_graphs")
-        @Schema(description = "Set of graph names associated with the project")
         private Set graphs;
         @JsonProperty("project_description")
-        @Schema(description = "The project description")
         private String description;
         public HugeProject build() {
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/TargetAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/TargetAPI.java
index 7f673048dc..d59023f871 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/TargetAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/TargetAPI.java
@@ -35,8 +35,6 @@
 import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
 import com.fasterxml.jackson.annotation.JsonProperty;
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.Consumes;
@@ -64,7 +62,6 @@ public class TargetAPI extends API {
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String create(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
                          JsonTarget jsonTarget) {
         LOG.debug("GraphSpace [{}] create target: {}", graphSpace, jsonTarget);
@@ -81,9 +78,7 @@ public String create(@Context GraphManager manager,
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String update(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The target id")
                          @PathParam("id") String id,
                          JsonTarget jsonTarget) {
         LOG.debug("GraphSpace [{}] update target: {}", graphSpace, jsonTarget);
@@ -104,9 +99,7 @@ public String update(@Context GraphManager manager,
     @Timed
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String list(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The limit of results to return")
                        @QueryParam("limit") @DefaultValue("100") long limit) {
         LOG.debug("GraphSpace [{}] list targets", graphSpace);
@@ -119,9 +112,7 @@ public String list(@Context GraphManager manager,
     @Path("{id}")
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String get(@Context GraphManager manager,
-                      @Parameter(description = "The graph space name")
                       @PathParam("graphspace") String graphSpace,
-                      @Parameter(description = "The target id")
                       @PathParam("id") String id) {
         LOG.debug("GraphSpace [{}] get target: {}", graphSpace, id);
@@ -134,9 +125,7 @@ public String get(@Context GraphManager manager,
     @Path("{id}")
     @Consumes(APPLICATION_JSON)
     public void delete(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The target id")
                        @PathParam("id") String id) {
         LOG.debug("GraphSpace [{}] delete target: {}", graphSpace, id);
@@ -152,16 +141,12 @@ public void delete(@Context GraphManager manager,
     private static class JsonTarget implements Checkable {
         @JsonProperty("target_name")
-        @Schema(description = "The target name", required = true)
         private String name;
         @JsonProperty("target_graph")
-        @Schema(description = "The target graph name", required = true)
         private String graph;
         @JsonProperty("target_url")
-        @Schema(description = "The target URL", required = true)
         private String url;
         @JsonProperty("target_resources") // error when List
-        @Schema(description = "The target resources")
         private List> resources;
         public HugeTarget build(HugeTarget target) {
@@ -198,6 +183,7 @@ public String toString() {
                    '}';
         }
+        @Override
         public void checkCreate(boolean isBatch) {
             E.checkArgumentNotNull(this.name,
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/UserAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/UserAPI.java
index de51e6955d..88fd608021 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/UserAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/auth/UserAPI.java
@@ -37,8 +37,6 @@
 import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
 import com.fasterxml.jackson.annotation.JsonProperty;
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.Consumes;
@@ -66,7 +64,6 @@ public class UserAPI extends API {
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String create(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
                          JsonUser jsonUser) {
         LOG.debug("GraphSpace [{}] create user: {}", graphSpace, jsonUser);
@@ -83,9 +80,7 @@ public String create(@Context GraphManager manager,
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String update(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The user id")
                          @PathParam("id") String id,
                          JsonUser jsonUser) {
         LOG.debug("GraphSpace [{}] update user: {}", graphSpace, jsonUser);
@@ -106,9 +101,7 @@ public String update(@Context GraphManager manager,
     @Timed
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String list(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The limit of results to return")
                        @QueryParam("limit") @DefaultValue("100") long limit) {
         LOG.debug("GraphSpace [{}] list users", graphSpace);
@@ -121,9 +114,7 @@ public String list(@Context GraphManager manager,
     @Path("{id}")
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String get(@Context GraphManager manager,
-                      @Parameter(description = "The graph space name")
                       @PathParam("graphspace") String graphSpace,
-                      @Parameter(description = "The user id")
                       @PathParam("id") String id) {
         LOG.debug("GraphSpace [{}] get user: {}", graphSpace, id);
@@ -136,9 +127,7 @@ public String get(@Context GraphManager manager,
     @Path("{id}/role")
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String role(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The user id")
                        @PathParam("id") String id) {
         LOG.debug("GraphSpace [{}] get user role: {}", graphSpace, id);
@@ -151,9 +140,7 @@ public String role(@Context GraphManager manager,
     @Path("{id}")
     @Consumes(APPLICATION_JSON)
     public void delete(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The user id")
                        @PathParam("id") String id) {
         LOG.debug("GraphSpace [{}] delete user: {}", graphSpace, id);
@@ -173,22 +160,16 @@ protected static Id parseId(String id) {
     private static class JsonUser implements Checkable {
         @JsonProperty("user_name")
-        @Schema(description = "The user name", required = true)
         private String name;
         @JsonProperty("user_password")
-        @Schema(description = "The user password", required = true)
         private String password;
         @JsonProperty("user_phone")
-        @Schema(description = "The user phone number")
         private String phone;
         @JsonProperty("user_email")
-        @Schema(description = "The user email address")
         private String email;
         @JsonProperty("user_avatar")
-        @Schema(description = "The user avatar URL")
         private String avatar;
         @JsonProperty("user_description")
-        @Schema(description = "The user description")
         private String description;
         public HugeUser build(HugeUser user) {
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/cypher/CypherAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/cypher/CypherAPI.java
index a30bff73b8..e8f760140a 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/cypher/CypherAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/cypher/CypherAPI.java
@@ -33,7 +33,6 @@
 import com.codahale.metrics.annotation.Timed;
-import io.swagger.v3.oas.annotations.Parameter;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.Consumes;
@@ -73,11 +72,8 @@ private CypherManager cypherManager() {
     @CompressInterceptor.Compress(buffer = (1024 * 40))
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public CypherModel query(@Context HttpHeaders headers,
-                             @Parameter(description = "The graph space name")
                              @PathParam("graphspace") String graphspace,
-                             @Parameter(description = "The graph name")
                              @PathParam("graph") String graph,
-                             @Parameter(description = "The cypher query string")
                              @QueryParam("cypher") String cypher) {
         return this.queryByCypher(headers, graphspace, graph, cypher);
@@ -90,11 +86,8 @@ public CypherModel query(@Context HttpHeaders headers,
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public CypherModel post(@Context HttpHeaders headers,
-                            @Parameter(description = "The graph space name")
                             @PathParam("graphspace") String graphspace,
-                            @Parameter(description = "The graph name")
                             @PathParam("graph") String graph,
-                            @Parameter(description = "The cypher query string")
                             String cypher) {
         return this.queryByCypher(headers, graphspace, graph, cypher);
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/cypher/CypherModel.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/cypher/CypherModel.java
index e7c3900605..cd5a769237 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/cypher/CypherModel.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/cypher/CypherModel.java
@@ -21,20 +21,13 @@
 import java.util.List;
 import java.util.Map;
-import io.swagger.v3.oas.annotations.media.Schema;
-
 /**
  * As same as response of GremlinAPI
  */
 public class CypherModel {
-    @Schema(description = "The request ID")
     public String requestId;
-
-    @Schema(description = "The response status")
     public Status status = new Status();
-
-    @Schema(description = "The query result")
     public Result result = new Result();
     public static CypherModel dataOf(String requestId, List data) {
@@ -58,22 +51,14 @@ private CypherModel() {
     public static class Status {
-        @Schema(description = "The status message")
         public String message = "";
-
-        @Schema(description = "The status code")
         public int code;
-
-        @Schema(description = "Additional status attributes")
         public Map attributes = Collections.EMPTY_MAP;
     }
     private static class Result {
-        @Schema(description = "The result data list")
         public List data;
-
-        @Schema(description = "The result metadata")
         public Map meta = Collections.EMPTY_MAP;
     }
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/graph/BatchAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/graph/BatchAPI.java
index 85beb142db..2ba95e5bc9 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/graph/BatchAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/graph/BatchAPI.java
@@ -40,8 +40,6 @@
 import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
 import com.fasterxml.jackson.annotation.JsonProperty;
-import io.swagger.v3.oas.annotations.media.Schema;
-
 public class BatchAPI extends API {
     private static final Logger LOG = Log.logger(BatchAPI.class);
@@ -80,20 +78,14 @@ public R commit(HugeConfig config, HugeGraph g, int size,
     }
     @JsonIgnoreProperties(value = {"type"})
-    @Schema(description = "Base class for vertex/edge in batch operations")
     protected abstract static class JsonElement implements Checkable {
-        @Schema(description = "The vertex/edge ID. If not specified, " +
-                              "it will be automatically generated based on ID strategy")
         @JsonProperty("id")
         public Object id;
-        @Schema(description = "The vertex/edge label")
         @JsonProperty("label")
         public String label;
-        @Schema(description = "The properties of the vertex/edge in key-value format")
         @JsonProperty("properties")
         public Map properties;
-        @Schema(description = "The type of element (vertex or edge)", hidden = true)
         @JsonProperty("type")
         public String type;
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/graph/EdgeAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/graph/EdgeAPI.java
index 1f229cd6b1..4afc2fec97 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/graph/EdgeAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/graph/EdgeAPI.java
@@ -57,8 +57,6 @@
 import com.codahale.metrics.annotation.Timed;
 import com.fasterxml.jackson.annotation.JsonProperty;
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.annotation.security.RolesAllowed;
 import jakarta.inject.Singleton;
@@ -87,11 +85,9 @@ public class EdgeAPI extends BatchAPI {
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
-                   "$action=edge_write"})
+                                   "$action=edge_write"})
     public String create(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
                          JsonEdge jsonEdge) {
         LOG.debug("Graph [{}] create edge: {}", graph, jsonEdge);
@@ -129,14 +125,11 @@
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
-                   "$action=edge_write"})
+                                   "$action=edge_write"})
     public String create(@Context HugeConfig config,
                          @Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
-                         @Parameter(description = "Whether to check if target vertices exist")
                          @QueryParam("check_vertex") @DefaultValue("true")
                          boolean checkVertex,
                          List jsonEdges) {
@@ -176,12 +169,10 @@
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
-                   "$action=edge_write"})
+                                   "$action=edge_write"})
     public String update(@Context HugeConfig config,
                          @Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
                          BatchEdgeRequest req) {
         BatchEdgeRequest.checkUpdate(req);
@@ -232,15 +223,11 @@
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
-                   "$action=edge_write"})
+                                   "$action=edge_write"})
     public String update(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
-                         @Parameter(description = "The edge ID")
                          @PathParam("id") String id,
-                         @Parameter(description = "Action to perform: 'append' or 'remove'")
                          @QueryParam("action") String action,
                          JsonEdge jsonEdge) {
         LOG.debug("Graph [{}] update edge: {}", graph, jsonEdge);
@@ -276,29 +263,18 @@
     @Compress
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
-                   "$action=edge_read"})
+                                   "$action=edge_read"})
     public String list(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The graph name")
                        @PathParam("graph") String graph,
-                       @Parameter(description = "The vertex ID to query edges. " +
-                                                "If not specified, query all edges")
                        @QueryParam("vertex_id") String vertexId,
-                       @Parameter(description = "The direction of edges: BOTH, IN, or OUT")
                        @QueryParam("direction") String direction,
-                       @Parameter(description = "Filter by edge label")
                        @QueryParam("label") String label,
-                       @Parameter(description = "Filter by edge properties in JSON format")
                        @QueryParam("properties") String properties,
-                       @Parameter(description = "Keep the starting predicate P in property query")
                        @QueryParam("keep_start_p") @DefaultValue("false")
                        boolean keepStartP,
-                       @Parameter(description = "Offset for pagination")
                        @QueryParam("offset") @DefaultValue("0") long offset,
-                       @Parameter(description = "Page number for pagination")
                        @QueryParam("page") String page,
-                       @Parameter(description = "Limit the number of edges returned")
                        @QueryParam("limit") @DefaultValue("100") long limit) {
         LOG.debug("Graph [{}] query edges by vertex: {}, direction: {}, " +
                   "label: {}, properties: {}, offset: {}, page: {}, limit: {}",
@@ -366,13 +342,10 @@ public String list(@Context GraphManager manager,
     @Path("{id}")
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
-                   "$action=edge_read"})
+                                   "$action=edge_read"})
     public String get(@Context GraphManager manager,
-                      @Parameter(description = "The graph space name")
                       @PathParam("graphspace") String graphSpace,
-                      @Parameter(description = "The graph name")
                       @PathParam("graph") String graph,
-                      @Parameter(description = "The edge ID")
                       @PathParam("id") String id) {
         LOG.debug("Graph [{}] get edge by id '{}'", graph, id);
@@ -392,15 +365,11 @@ public String get(@Context GraphManager manager,
     @Path("{id}")
     @Consumes(APPLICATION_JSON)
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
-                   "$action=edge_delete"})
+                                   "$action=edge_delete"})
     public void delete(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The graph name")
                        @PathParam("graph") String graph,
-                       @Parameter(description = "The edge ID")
                        @PathParam("id") String id,
-                       @Parameter(description = "The edge label (used to verify edge identity)")
                        @QueryParam("label") String label) {
         LOG.debug("Graph [{}] remove vertex by id '{}'", graph, id);
@@ -516,16 +485,12 @@ private Id getEdgeId(HugeGraph g, JsonEdge newEdge) {
     protected static class BatchEdgeRequest {
-        @Schema(description = "List of edges to be created or updated", required = true)
         @JsonProperty("edges")
         public List jsonEdges;
-        @Schema(description = "Update strategies for each property key", required = true)
         @JsonProperty("update_strategies")
         public Map updateStrategies;
-        @Schema(description = "Whether to check if source/target vertices exist")
         @JsonProperty("check_vertex")
         public boolean checkVertex = false;
-        @Schema(description = "Whether to create edge if it does not exist")
         @JsonProperty("create_if_not_exist")
         public boolean createIfNotExist = true;
@@ -550,16 +515,12 @@ public String toString() {
     private static class JsonEdge extends JsonElement {
-        @Schema(description = "The source vertex ID", required = true)
         @JsonProperty("outV")
         public Object source;
-        @Schema(description = "The source vertex label", required = true)
         @JsonProperty("outVLabel")
         public String sourceLabel;
-        @Schema(description = "The target vertex ID", required = true)
         @JsonProperty("inV")
         public Object target;
-        @Schema(description = "The target vertex label", required = true)
         @JsonProperty("inVLabel")
         public String targetLabel;
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/graph/VertexAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/graph/VertexAPI.java
index af1433ac46..0f24a5ec46 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/graph/VertexAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/graph/VertexAPI.java
@@ -56,8 +56,6 @@
 import com.codahale.metrics.annotation.Timed;
 import com.fasterxml.jackson.annotation.JsonProperty;
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.annotation.security.RolesAllowed;
 import jakarta.inject.Singleton;
@@ -87,9 +85,7 @@ public class VertexAPI extends BatchAPI {
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$owner=$graph $action=vertex_write"})
     public String create(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
                          JsonVertex jsonVertex) {
         LOG.debug("Graph [{}] create vertex: {}", graph, jsonVertex);
@@ -111,9 +107,7 @@
     @RolesAllowed({"space_member", "$owner=$graph $action=vertex_write"})
     public String create(@Context HugeConfig config,
                          @Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
                          List jsonVertices) {
         LOG.debug("Graph [{}] create vertices: {}", graph, jsonVertices);
@@ -146,9 +140,7 @@
     @RolesAllowed({"space_member", "$owner=$graph $action=vertex_write"})
     public String update(@Context HugeConfig config,
                          @Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
                          BatchVertexRequest req) {
         BatchVertexRequest.checkUpdate(req);
@@ -197,15 +189,9 @@ public String update(@Context HugeConfig config,
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$owner=$graph $action=vertex_write"})
     public String update(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
-                         @Parameter(description = "The vertex ID")
                          @PathParam("id") String idValue,
-                         @Parameter(description =
-                                 "Action to perform: 'append' to add new properties, " +
-                                 "'remove' to delete existing properties")
                          @QueryParam("action") String action,
                          JsonVertex jsonVertex) {
         LOG.debug("Graph [{}] update vertex: {}", graph, jsonVertex);
@@ -239,25 +225,14 @@ public String update(@Context GraphManager manager,
     @RolesAllowed({"space", "$graphspace=$graphspace $owner=$graph " +
                    "$action=vertex_read"})
     public String list(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The graph name")
                        @PathParam("graph") String graph,
-                       @Parameter(description = "Filter by vertex label")
                        @QueryParam("label") String label,
-                       @Parameter(description = "Filter by vertex properties in JSON format, " +
-                                                "e.g., {\"key\":\"value\"}")
                        @QueryParam("properties") String properties,
-                       @Parameter(description =
-                               "Keep the starting predicate P (like P.gt(), P.lt()) " +
-                               "in property query or parse it to relational operators")
                        @QueryParam("keep_start_p") @DefaultValue("false")
                        boolean keepStartP,
-                       @Parameter(description = "Offset for pagination")
                        @QueryParam("offset") @DefaultValue("0") long offset,
-                       @Parameter(description = "Page number for pagination")
                        @QueryParam("page") String page,
-                       @Parameter(description = "Limit the number of vertices returned")
                        @QueryParam("limit") @DefaultValue("100") long limit) {
         LOG.debug("Graph [{}] query vertices by label: {}, properties: {}, " +
                   "offset: {}, page: {}, limit: {}",
@@ -311,11 +286,8 @@ public String list(@Context GraphManager manager,
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$owner=$graph $action=vertex_read"})
     public String get(@Context GraphManager manager,
-                      @Parameter(description = "The graph space name")
                       @PathParam("graphspace") String graphSpace,
-                      @Parameter(description = "The graph name")
                       @PathParam("graph") String graph,
-                      @Parameter(description = "The vertex ID")
                       @PathParam("id") String idValue) {
         LOG.debug("Graph [{}] get vertex by id '{}'", graph, idValue);
@@ -337,13 +309,9 @@ public String get(@Context GraphManager manager,
     @Consumes(APPLICATION_JSON)
     @RolesAllowed({"space_member", "$owner=$graph $action=vertex_delete"})
     public void delete(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The graph name")
                        @PathParam("graph") String graph,
-                       @Parameter(description = "The vertex ID")
                        @PathParam("id") String idValue,
-                       @Parameter(description = "The vertex label (used to verify vertex identity)")
                        @QueryParam("label") String label) {
         LOG.debug("Graph [{}] remove vertex by id '{}'", graph, idValue);
@@ -422,13 +390,10 @@ private static Id getVertexId(HugeGraph g, JsonVertex vertex) {
     private static class BatchVertexRequest {
-        @Schema(description = "List of vertices to be created or updated", required = true)
         @JsonProperty("vertices")
         public List jsonVertices;
-        @Schema(description = "Update strategies for each property key", required = true)
         @JsonProperty("update_strategies")
         public Map updateStrategies;
-        @Schema(description = "Whether to create vertex if it does not exist")
         @JsonProperty("create_if_not_exist")
         public boolean createIfNotExist = true;
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/gremlin/GremlinAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/gremlin/GremlinAPI.java
index c1701ea943..110a3ef5b8 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/gremlin/GremlinAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/gremlin/GremlinAPI.java
@@ -25,7 +25,6 @@
 import com.codahale.metrics.Histogram;
 import com.codahale.metrics.annotation.Timed;
-import io.swagger.v3.oas.annotations.Parameter;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.Consumes;
@@ -56,7 +55,6 @@ public class GremlinAPI extends GremlinQueryAPI {
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public Response post(@Context HugeConfig conf,
                          @Context HttpHeaders headers,
-                         @Parameter(description = "The Gremlin query request body")
                          String request) {
         /* The following code is reserved for forwarding request */
         // context.getRequestDispatcher(location).forward(request, response);
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/AlgorithmAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/AlgorithmAPI.java
index 79933bc371..82c0611f5f 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/AlgorithmAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/AlgorithmAPI.java
@@ -36,7 +36,6 @@
 import com.codahale.metrics.annotation.Timed;
 import com.google.common.collect.ImmutableMap;
-import io.swagger.v3.oas.annotations.Parameter;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.Consumes;
@@ -62,13 +61,9 @@ public class AlgorithmAPI extends API {
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RedirectFilter.RedirectMasterRole
     public Map post(@Context GraphManager manager,
-                    @Parameter(description = "The graphspace name")
                     @PathParam("graphspace") String graphSpace,
-                    @Parameter(description = "The graph name")
                     @PathParam("graph") String graph,
-                    @Parameter(description = "The algorithm name")
                     @PathParam("name") String algorithm,
-                    @Parameter(description = "The algorithm parameters")
                     Map parameters) {
         LOG.debug("Graph [{}] schedule algorithm job: {}", graph, parameters);
         E.checkArgument(algorithm != null && !algorithm.isEmpty(),
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/ComputerAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/ComputerAPI.java
index d5188385cc..3e88f8ccb6 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/ComputerAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/ComputerAPI.java
@@ -36,7 +36,6 @@
 import com.codahale.metrics.annotation.Timed;
 import com.google.common.collect.ImmutableMap;
-import io.swagger.v3.oas.annotations.Parameter;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.Consumes;
@@ -62,13 +61,9 @@ public class ComputerAPI extends API {
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RedirectFilter.RedirectMasterRole
     public Map post(@Context GraphManager manager,
-                    @Parameter(description = "The graph name")
                     @PathParam("graph") String graph,
-                    @Parameter(description = "The graphspace name")
                     @PathParam("graphspace") String graphSpace,
-                    @Parameter(description = "The computer name")
                     @PathParam("name") String computer,
-                    @Parameter(description = "The computer parameters")
                     Map parameters) {
         LOG.debug("Graph [{}] schedule computer job: {}", graph, parameters);
         E.checkArgument(computer != null && !computer.isEmpty(),
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/GremlinAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/GremlinAPI.java
index 779cf19b66..2b28364b26 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/GremlinAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/GremlinAPI.java
@@ -46,8 +46,6 @@
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.google.common.collect.ImmutableMap;
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Singleton; @@ -79,11 +77,8 @@ public class GremlinAPI extends API { "$action=gremlin_execute"}) @RedirectFilter.RedirectMasterRole public Map post(@Context GraphManager manager, - @Parameter(description = "The graphspace name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The Gremlin job request") GremlinRequest request) { LOG.debug("Graph [{}] schedule gremlin job: {}", graph, request); checkCreatingBody(request); @@ -104,16 +99,12 @@ public static class GremlinRequest implements Checkable { // See org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer @JsonProperty - @Schema(description = "The Gremlin script to execute", required = true) private String gremlin; @JsonProperty - @Schema(description = "The bindings for the Gremlin script") private Map bindings = new HashMap<>(); @JsonProperty - @Schema(description = "The language of the Gremlin script", example = "gremlin-groovy") private String language = "gremlin-groovy"; @JsonProperty - @Schema(description = "The aliases for graph references") private Map aliases = new HashMap<>(); public String gremlin() { diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/RebuildAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/RebuildAPI.java index 3219c8b3ab..35e0d2cadc 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/RebuildAPI.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/RebuildAPI.java @@ -31,7 +31,6 @@ import com.codahale.metrics.annotation.Timed; import com.google.common.collect.ImmutableMap; -import io.swagger.v3.oas.annotations.Parameter; import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.annotation.security.RolesAllowed; import 
jakarta.inject.Singleton; @@ -57,12 +56,9 @@ public class RebuildAPI extends API { "$action=index_label_write"}) @RedirectFilter.RedirectMasterRole public Map vertexLabelRebuild(@Context GraphManager manager, - @Parameter(description = "The graphspace name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The vertex label to rebuild") @PathParam("name") String name) { LOG.debug("Graph [{}] rebuild vertex label: {}", graph, name); @@ -79,12 +75,9 @@ public Map vertexLabelRebuild(@Context GraphManager manager, @RolesAllowed({"space", "$graphspace=$graphspace $owner=$graph " + "$action=index_label_write"}) public Map edgeLabelRebuild(@Context GraphManager manager, - @Parameter(description = "The graphspace name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The edge label name to rebuild") @PathParam("name") String name) { LOG.debug("Graph [{}] rebuild edge label: {}", graph, name); @@ -102,12 +95,9 @@ public Map edgeLabelRebuild(@Context GraphManager manager, "$action=index_label_write"}) @RedirectFilter.RedirectMasterRole public Map indexLabelRebuild(@Context GraphManager manager, - @Parameter(description = "The graphspace name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The index label name to rebuild") @PathParam("name") String name) { LOG.debug("Graph [{}] rebuild index label: {}", graph, name); diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/TaskAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/TaskAPI.java index d35cc9a955..151d3356e8 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/TaskAPI.java +++ 
b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/job/TaskAPI.java @@ -41,7 +41,6 @@ import com.codahale.metrics.annotation.Timed; -import io.swagger.v3.oas.annotations.Parameter; import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.inject.Singleton; import jakarta.ws.rs.BadRequestException; @@ -70,18 +69,12 @@ public class TaskAPI extends API { @Timed @Produces(APPLICATION_JSON_WITH_CHARSET) public Map list(@Context GraphManager manager, - @Parameter(description = "The graphspace name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The task status to filter") @QueryParam("status") String status, - @Parameter(description = "The task ids to filter") @QueryParam("ids") List ids, - @Parameter(description = "The maximum number of tasks") @QueryParam("limit") @DefaultValue("100") long limit, - @Parameter(description = "The page token for pagination") @QueryParam("page") String page) { LOG.debug("Graph [{}] list tasks with status {}, ids {}, " + "limit {}, page {}", graph, status, ids, limit, page); @@ -131,11 +124,8 @@ public Map list(@Context GraphManager manager, @Path("{id}") @Produces(APPLICATION_JSON_WITH_CHARSET) public Map get(@Context GraphManager manager, - @Parameter(description = "The graphspace name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The task id") @PathParam("id") long id) { LOG.debug("Graph [{}] get task: {}", graph, id); @@ -149,13 +139,9 @@ public Map get(@Context GraphManager manager, @Path("{id}") @RedirectFilter.RedirectMasterRole public void delete(@Context GraphManager manager, - @Parameter(description = "The graphspace name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The task id") 
@PathParam("id") long id, - @Parameter(description = "Force delete the task even if it's running") @DefaultValue("false") @QueryParam("force") boolean force) { LOG.debug("Graph [{}] delete task: {}", graph, id); @@ -172,14 +158,10 @@ public void delete(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RedirectFilter.RedirectMasterRole public Map update(@Context GraphManager manager, - @Parameter(description = "The graphspace name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The task id") @PathParam("id") long id, - @Parameter(description = "The action to perform on the task") @QueryParam("action") String action) { LOG.debug("Graph [{}] cancel task: {}", graph, id); diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/metrics/MetricsAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/metrics/MetricsAPI.java index b457b66bfc..c6c6e8c962 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/metrics/MetricsAPI.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/metrics/MetricsAPI.java @@ -71,7 +71,6 @@ import com.codahale.metrics.annotation.Timed; import io.swagger.v3.oas.annotations.Operation; -import io.swagger.v3.oas.annotations.Parameter; import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Singleton; @@ -194,8 +193,6 @@ public String timers() { @RolesAllowed({"space", "$owner= $action=metrics_read"}) @Operation(summary = "get all base metrics") public String all(@Context GraphManager manager, - @Parameter(description = "Output format type: 'json' for JSON format, " + - "other values for Prometheus format") @QueryParam("type") String type) { if (type != null && type.equals(JSON_STR)) { return baseMetricAll(); @@ -210,9 +207,7 @@ public String all(@Context 
GraphManager manager, @Produces(APPLICATION_TEXT_WITH_CHARSET) @RolesAllowed({"space", "$owner= $action=metrics_read"}) @Operation(summary = "get all statistics metrics") - public String statistics( - @Parameter(description = "Output format type: 'json' for JSON format") - @QueryParam("type") String type) { + public String statistics(@QueryParam("type") String type) { Map> metricMap = statistics(); if (type != null && type.equals(JSON_STR)) { diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/profile/GraphsAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/profile/GraphsAPI.java index 9316d7341b..b7839ce053 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/profile/GraphsAPI.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/profile/GraphsAPI.java @@ -45,7 +45,6 @@ import com.codahale.metrics.annotation.Timed; import com.google.common.collect.ImmutableMap; -import io.swagger.v3.oas.annotations.Parameter; import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Singleton; @@ -89,7 +88,6 @@ private static Map convConfig(Map config) { @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space_member", "$dynamic"}) public Object list(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, @Context SecurityContext sc) { LOG.debug("List graphs in graph space {}", graphSpace); @@ -98,13 +96,13 @@ public Object list(@Context GraphManager manager, } Set graphs = manager.graphs(graphSpace); LOG.debug("Get graphs list from graph manager with size {}", - graphs.size()); + graphs.size()); // Filter by user role Set filterGraphs = new HashSet<>(); for (String graph : graphs) { LOG.debug("Get graph {} and verify auth", graph); String role = RequiredPerm.roleFor(graphSpace, graph, - HugePermission.READ); + 
HugePermission.READ); if (sc.isUserInRole(role)) { try { graph(manager, graphSpace, graph); @@ -126,9 +124,7 @@ public Object list(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space_member", "$owner=$name"}) public Object get(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("name") String name) { LOG.debug("Get graph by name '{}'", name); @@ -142,11 +138,8 @@ public Object get(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space"}) public void drop(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("name") String name, - @Parameter(description = "Confirmation message to drop the graph") @QueryParam("confirm_message") String message) { LOG.debug("Drop graph by name '{}'", name); @@ -161,14 +154,12 @@ public void drop(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"analyst"}) public Object reload(@Context GraphManager manager, - @Parameter( - description = "The action map containing 'action'='reload'") Map actionMap) { LOG.info("[SERVER] Manage graph with action map {}", actionMap); E.checkArgument(actionMap != null && actionMap.containsKey(GRAPH_ACTION), - "Please pass '%s' for graphs manage", GRAPH_ACTION); + "Please pass '%s' for graphs manage", GRAPH_ACTION); String action = actionMap.get(GRAPH_ACTION); if (action.equals(GRAPH_ACTION_RELOAD)) { manager.reload(); @@ -186,19 +177,12 @@ public Object reload(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space"}) public Object create(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph 
name to create") @PathParam("name") String name, - @Parameter(description = "The graph name to clone from (optional)") @QueryParam("clone_graph_name") String clone, - @Parameter( - description = "The graph configuration options including " + - "'backend', 'serializer', 'store' and optionally " + - "'description'") Map configs) { LOG.debug("Create graph {} with config options '{}' in " + - "graph space '{}'", name, configs, graphSpace); + "graph space '{}'", name, configs, graphSpace); GraphSpace gs = manager.graphSpace(graphSpace); HugeGraph graph; E.checkArgumentNotNull(gs, "Not existed graph space: '%s'", graphSpace); @@ -224,18 +208,18 @@ public Object create(@Context GraphManager manager, } else { // Create new graph graph = manager.createGraph(graphSpace, name, creator, - convConfig(configs), true); + convConfig(configs), true); } String description = (String) configs.get(GRAPH_DESCRIPTION); if (description == null) { description = Strings.EMPTY; } Object result = ImmutableMap.of("name", graph.name(), - "nickname", graph.nickname(), - "backend", graph.backend(), - "description", description); + "nickname", graph.nickname(), + "backend", graph.backend(), + "description", description); LOG.info("user [{}] create graph [{}] in graph space [{}] with config " + - "[{}]", creator, name, graphSpace, configs); + "[{}]", creator, name, graphSpace, configs); return result; } @@ -245,9 +229,7 @@ public Object create(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space"}) public File getConf(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("name") String name) { LOG.debug("Get graph configuration by name '{}'", name); @@ -268,12 +250,8 @@ public File getConf(@Context GraphManager manager, @Consumes(APPLICATION_JSON) @RolesAllowed({"space"}) public void clear(@Context GraphManager manager, - 
@Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("name") String name, - @Parameter(description = "Confirmation message to clear all data, must be: " + - CONFIRM_CLEAR) @QueryParam("confirm_message") String message) { LOG.debug("Clear graph by name '{}'", name); @@ -289,9 +267,7 @@ public void clear(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space", "$owner=$name"}) public Object createSnapshot(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("name") String name) { LOG.debug("Create snapshot for graph '{}'", name); @@ -306,9 +282,7 @@ public Object createSnapshot(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space", "$owner=$name"}) public Object resumeSnapshot(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("name") String name) { LOG.debug("Resume snapshot for graph '{}'", name); @@ -324,9 +298,7 @@ public Object resumeSnapshot(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space"}) public String compact(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("name") String name) { LOG.debug("Manually compact graph '{}'", name); @@ -341,9 +313,7 @@ public String compact(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space", "$owner=$name"}) public Map mode(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The 
graph name") @PathParam("name") String name, GraphMode mode) { LOG.debug("Set mode to: '{}' of graph '{}'", mode, name); @@ -377,9 +347,7 @@ public Map mode(@Context GraphManager manager, @RolesAllowed({"space"}) public Map graphReadMode( @Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("name") String name, GraphReadMode readMode) { LOG.debug("Set graph-read-mode to: '{}' of graph '{}'", @@ -389,7 +357,7 @@ public Map graphReadMode( "Graph-read-mode can't be null"); E.checkArgument(readMode == GraphReadMode.ALL || readMode == GraphReadMode.OLTP_ONLY, - "Graph-read-mode could be ALL or OLTP_ONLY"); + "Graph-read-mode could be ALL or OLTP_ONLY"); HugeGraph g = graph(manager, graphSpace, name); manager.graphReadMode(graphSpace, name, readMode); g.readMode(readMode); diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/profile/WhiteIpListAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/profile/WhiteIpListAPI.java index a0ee5af9cb..e965ed21a9 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/profile/WhiteIpListAPI.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/profile/WhiteIpListAPI.java @@ -38,7 +38,6 @@ import com.google.common.collect.ImmutableMap; import io.swagger.v3.oas.annotations.Operation; -import io.swagger.v3.oas.annotations.Parameter; import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Singleton; @@ -140,9 +139,6 @@ public Map updateWhiteIPs(@Context GraphManager manager, @RolesAllowed("admin") @Operation(summary = "enable/disable the white ip list") public Map updateStatus(@Context GraphManager manager, - @Parameter(description = "Status to set: " + - "'true' to enable, " + - "'false' to disable") @QueryParam("status") String status) { 
LOG.debug("Enable or disable white ip list"); E.checkArgument("true".equals(status) || diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/raft/RaftAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/raft/RaftAPI.java index f868df522e..c981858be0 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/raft/RaftAPI.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/raft/RaftAPI.java @@ -40,7 +40,6 @@ import com.codahale.metrics.annotation.Timed; import com.google.common.collect.ImmutableMap; -import io.swagger.v3.oas.annotations.Parameter; import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Singleton; @@ -68,11 +67,8 @@ public class RaftAPI extends API { @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space_member"}) public Map> listPeers(@Context GraphManager manager, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The raft group name") @QueryParam("group") @DefaultValue("default") String group) { @@ -91,11 +87,8 @@ public Map> listPeers(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space_member"}) public Map getLeader(@Context GraphManager manager, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The raft group name") @QueryParam("group") @DefaultValue("default") String group) { @@ -115,15 +108,11 @@ public Map getLeader(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space_member"}) public Map transferLeader(@Context GraphManager manager, - @Parameter(description = "The graph space name") 
@PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The raft group name") @QueryParam("group") @DefaultValue("default") String group, - @Parameter(description = "The endpoint address") @QueryParam("endpoint") String endpoint) { LOG.debug("Graph [{}] prepare to transfer leader to: {}", @@ -144,17 +133,11 @@ public Map transferLeader(@Context GraphManager manager, @Produces(APPLICATION_JSON_WITH_CHARSET) @RolesAllowed({"space_member"}) public Map setLeader(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The raft group name") @QueryParam("group") @DefaultValue("default") String group, - @Parameter( - description = "The endpoint address to set as " + - "leader") @QueryParam("endpoint") String endpoint) { LOG.debug("Graph [{}] prepare to set leader to: {}", @@ -175,15 +158,10 @@ public Map setLeader(@Context GraphManager manager, @RolesAllowed({"space_member"}) @RedirectFilter.RedirectMasterRole public Map addPeer(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The raft group name") @QueryParam("group") @DefaultValue("default") String group, - @Parameter( - description = "The endpoint address of the peer to add") @QueryParam("endpoint") String endpoint) { LOG.debug("Graph [{}] prepare to add peer: {}", graph, endpoint); @@ -211,16 +189,10 @@ public Map addPeer(@Context GraphManager manager, @RolesAllowed({"space_member"}) @RedirectFilter.RedirectMasterRole public Map removePeer(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - 
@Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The raft group name") @QueryParam("group") @DefaultValue("default") String group, - @Parameter( - description = "The endpoint address of the peer to " + - "remove") @QueryParam("endpoint") String endpoint) { LOG.debug("Graph [{}] prepare to remove peer: {}", graph, endpoint); diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/EdgeLabelAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/EdgeLabelAPI.java index f2026d58bd..0c10827a10 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/EdgeLabelAPI.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/EdgeLabelAPI.java @@ -45,8 +45,6 @@ import com.fasterxml.jackson.annotation.JsonProperty; import com.google.common.collect.ImmutableMap; -import io.swagger.v3.oas.annotations.Parameter; -import io.swagger.v3.oas.annotations.media.Schema; import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Singleton; @@ -77,9 +75,7 @@ public class EdgeLabelAPI extends API { "$action=edge_label_write"}) @RedirectFilter.RedirectMasterRole public String create(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, JsonEdgeLabel jsonEdgeLabel) { LOG.debug("Graph [{}] create edge label: {}", graph, jsonEdgeLabel); @@ -100,13 +96,9 @@ public String create(@Context GraphManager manager, "$action=edge_label_write"}) @RedirectFilter.RedirectMasterRole public String update(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - 
@Parameter(description = "The edge label name") @PathParam("name") String name, - @Parameter(description = "Action to perform: 'append' or 'remove'") @QueryParam("action") String action, JsonEdgeLabel jsonEdgeLabel) { LOG.debug("Graph [{}] {} edge label: {}", @@ -131,11 +123,8 @@ public String update(@Context GraphManager manager, @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " + "$action=edge_label_read"}) public String list(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "Filter edge labels by names") @QueryParam("names") List names) { boolean listAll = CollectionUtils.isEmpty(names); if (listAll) { @@ -164,11 +153,8 @@ public String list(@Context GraphManager manager, @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " + "$action=edge_label_read"}) public String get(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The edge label name") @PathParam("name") String name) { LOG.debug("Graph [{}] get edge label by name '{}'", graph, name); @@ -187,11 +173,8 @@ public String get(@Context GraphManager manager, "$action=edge_label_delete"}) @RedirectFilter.RedirectMasterRole public Map delete(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The edge label name to delete") @PathParam("name") String name) { LOG.debug("Graph [{}] remove edge label by name '{}'", graph, name); @@ -206,55 +189,38 @@ public Map delete(@Context GraphManager manager, * JsonEdgeLabel is only used to receive 
      create and append requests
      */
     @JsonIgnoreProperties(value = {"index_labels", "status"})
-    @Schema(description = "Edge label creation/update request")
     private static class JsonEdgeLabel implements Checkable {
 
-        @Schema(description = "The edge label ID (only used in RESTORING mode)")
         @JsonProperty("id")
         public long id;
-        @Schema(description = "The edge label name", required = true)
         @JsonProperty("name")
         public String name;
-        @Schema(description = "The edge label type: NORMAL, EDGE, or RELATION")
         @JsonProperty("edgelabel_type")
         public EdgeLabelType edgeLabelType;
-        @Schema(description = "The parent edge label name (for inheritance)")
         @JsonProperty("parent_label")
         public String fatherLabel;
-        @Schema(description = "The source vertex label name", required = true)
         @JsonProperty("source_label")
         public String sourceLabel;
-        @Schema(description = "The target vertex label name", required = true)
         @JsonProperty("target_label")
         public String targetLabel;
-        @Schema(description = "Links between source and target vertex labels")
         @JsonProperty("links")
         public Set<Map<String, String>> links;
-        @Schema(description = "The frequency: NORMAL or ONE_DAILY")
         @JsonProperty("frequency")
         public Frequency frequency;
-        @Schema(description = "The property key names associated with this edge label")
         @JsonProperty("properties")
         public String[] properties;
-        @Schema(description = "The sort key names for edge properties")
         @JsonProperty("sort_keys")
         public String[] sortKeys;
-        @Schema(description = "The nullable property key names")
         @JsonProperty("nullable_keys")
         public String[] nullableKeys;
-        @Schema(description = "Time-to-live in seconds")
         @JsonProperty("ttl")
         public long ttl;
-        @Schema(description = "The property key name to use as TTL start time")
         @JsonProperty("ttl_start_time")
         public String ttlStartTime;
-        @Schema(description = "Whether to enable label indexing")
         @JsonProperty("enable_label_index")
         public Boolean enableLabelIndex;
-        @Schema(description = "User-defined metadata")
         @JsonProperty("user_data")
         public Userdata userdata;
-        @Schema(description = "Whether to check if edge label exists before creation")
         @JsonProperty("check_exist")
         public Boolean checkExist;
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/IndexLabelAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/IndexLabelAPI.java
index b76d532360..9e60b01076 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/IndexLabelAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/IndexLabelAPI.java
@@ -45,8 +45,6 @@
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.google.common.collect.ImmutableMap;
 
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.annotation.security.RolesAllowed;
 import jakarta.inject.Singleton;
@@ -77,9 +75,7 @@ public class IndexLabelAPI extends API {
                    "$action=index_label_write"})
     @RedirectFilter.RedirectMasterRole
     public String create(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
                          JsonIndexLabel jsonIndexLabel) {
         LOG.debug("Graph [{}] create index label: {}", graph, jsonIndexLabel);
@@ -99,13 +95,9 @@ public String create(@Context GraphManager manager,
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RedirectFilter.RedirectMasterRole
     public String update(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
-                         @Parameter(description = "The index label name")
                          @PathParam("name") String name,
-                         @Parameter(description = "Action to perform: 'append' or 'remove'")
                          @QueryParam("action") String action,
                          IndexLabelAPI.JsonIndexLabel jsonIndexLabel) {
         LOG.debug("Graph [{}] {} index label: {}",
@@ -129,11 +121,8 @@ public String update(@Context GraphManager manager,
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
                    "$action=index_label_read"})
     public String list(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The graph name")
                        @PathParam("graph") String graph,
-                       @Parameter(description = "Filter index labels by names")
                        @QueryParam("names") List<String> names) {
         boolean listAll = CollectionUtils.isEmpty(names);
         if (listAll) {
@@ -162,11 +151,8 @@ public String list(@Context GraphManager manager,
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
                    "$action=index_label_read"})
     public String get(@Context GraphManager manager,
-                      @Parameter(description = "The graph space name")
                       @PathParam("graphspace") String graphSpace,
-                      @Parameter(description = "The graph name")
                       @PathParam("graph") String graph,
-                      @Parameter(description = "The index label name")
                       @PathParam("name") String name) {
         LOG.debug("Graph [{}] get index label by name '{}'", graph, name);
@@ -185,11 +171,8 @@ public String get(@Context GraphManager manager,
                    "$action=index_label_delete"})
     @RedirectFilter.RedirectMasterRole
     public Map<String, Id> delete(@Context GraphManager manager,
-                                  @Parameter(description = "The graph space name")
                                   @PathParam("graphspace") String graphSpace,
-                                  @Parameter(description = "The graph name")
                                   @PathParam("graph") String graph,
-                                  @Parameter(description = "The index label name to delete")
                                   @PathParam("name") String name) {
         LOG.debug("Graph [{}] remove index label by name '{}'", graph, name);
@@ -223,34 +206,24 @@ private static IndexLabel mapIndexLabel(IndexLabel label) {
      * JsonIndexLabel is only used to receive create and append requests
      */
     @JsonIgnoreProperties(value = {"status"})
-    @Schema(description = "Index label creation/update request")
     private static class JsonIndexLabel implements Checkable {
 
-        @Schema(description = "The index label ID (only used in RESTORING mode)")
         @JsonProperty("id")
         public long id;
-        @Schema(description = "The index label name", required = true)
         @JsonProperty("name")
         public String name;
-        @Schema(description = "The base type: VERTEX or EDGE", required = true)
         @JsonProperty("base_type")
         public HugeType baseType;
-        @Schema(description = "The base label name (vertex/edge label name)", required = true)
         @JsonProperty("base_value")
         public String baseValue;
-        @Schema(description = "The index type: SECONDARY, RANGE, SEARCH, or VECTOR")
         @JsonProperty("index_type")
         public IndexType indexType;
-        @Schema(description = "The property key names to build index on", required = true)
         @JsonProperty("fields")
         public String[] fields;
-        @Schema(description = "User-defined metadata")
         @JsonProperty("user_data")
         public Userdata userdata;
-        @Schema(description = "Whether to check if index label exists before creation")
         @JsonProperty("check_exist")
         public Boolean checkExist;
-        @Schema(description = "Whether to rebuild the index after creation")
         @JsonProperty("rebuild")
         public Boolean rebuild;
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/PropertyKeyAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/PropertyKeyAPI.java
index 27d6ab1da2..a23fa3e80e 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/PropertyKeyAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/PropertyKeyAPI.java
@@ -48,8 +48,6 @@
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.google.common.collect.ImmutableMap;
 
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.annotation.security.RolesAllowed;
 import jakarta.inject.Singleton;
@@ -77,12 +75,10 @@ public class PropertyKeyAPI extends API {
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
-                    "$action=property_key_write"})
+                   "$action=property_key_write"})
     @RedirectFilter.RedirectMasterRole
     public String create(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
                          JsonPropertyKey jsonPropertyKey) {
         LOG.debug("Graph [{}] create property key: {}", graph, jsonPropertyKey);
@@ -101,20 +97,12 @@ public String create(@Context GraphManager manager,
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
-                    "$action=property_key_write"})
+                   "$action=property_key_write"})
     @RedirectFilter.RedirectMasterRole
     public String update(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
-                         @Parameter(description = "The property key name")
                          @PathParam("name") String name,
-                         @Parameter(
-                                 description =
-                                         "Action to perform: 'append' to add new properties, " +
-                                         "'remove' to delete existing properties, " +
-                                         "'clear' to clear OLAP property data")
                          @QueryParam("action") String action,
                          PropertyKeyAPI.JsonPropertyKey jsonPropertyKey) {
         LOG.debug("Graph [{}] {} property key: {}",
@@ -152,13 +140,10 @@ public String update(@Context GraphManager manager,
     @Timed
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
-                    "$action=property_key_read"})
+                   "$action=property_key_read"})
     public String list(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The graph name")
                        @PathParam("graph") String graph,
-                       @Parameter(description = "Filter property keys by names")
                        @QueryParam("names") List<String> names) {
         boolean listAll = CollectionUtils.isEmpty(names);
         if (listAll) {
@@ -185,7 +170,7 @@ public String list(@Context GraphManager manager,
     @Path("{name}")
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
-                    "$action=property_key_read"})
+                   "$action=property_key_read"})
     public String get(@Context GraphManager manager,
                       @PathParam("graphspace") String graphSpace,
                       @PathParam("graph") String graph,
@@ -204,14 +189,11 @@ public String get(@Context GraphManager manager,
     @Consumes(APPLICATION_JSON)
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
-                    "$action=property_key_delete"})
+                   "$action=property_key_delete"})
     @RedirectFilter.RedirectMasterRole
     public Map<String, Id> delete(@Context GraphManager manager,
-                                  @Parameter(description = "The graph space name")
                                   @PathParam("graphspace") String graphSpace,
-                                  @Parameter(description = "The graph name")
                                   @PathParam("graph") String graph,
-                                  @Parameter(description = "The property key name to delete")
                                   @PathParam("name") String name) {
         LOG.debug("Graph [{}] remove property key by name '{}'", graph, name);
@@ -226,36 +208,24 @@ public Map<String, Id> delete(@Context GraphManager manager,
      * JsonPropertyKey is only used to receive create and append requests
      */
     @JsonIgnoreProperties(value = {"status"})
-    @Schema(description = "Property key creation/update request")
     private static class JsonPropertyKey implements Checkable {
 
-        @Schema(description = "The property key ID (only used in RESTORING mode)")
         @JsonProperty("id")
         public long id;
-        @Schema(description = "The property key name", required = true)
         @JsonProperty("name")
         public String name;
-        @Schema(description = "The cardinality: SINGLE, LIST, or SET")
         @JsonProperty("cardinality")
         public Cardinality cardinality;
-        @Schema(description = "The data type: STRING, TEXT, INT, LONG, FLOAT, " +
-                              "DOUBLE, BLOB, BOOLEAN, DATE, UUID")
         @JsonProperty("data_type")
         public DataType dataType;
-        @Schema(description = "The aggregate type: NONE, SUM, MAX, MIN, SUB, " +
-                              "SET, INC, BIGDECIMAL")
         @JsonProperty("aggregate_type")
         public AggregateType aggregateType;
-        @Schema(description = "The write type: OLTP, OLAP, IMMUTABLE")
         @JsonProperty("write_type")
         public WriteType writeType;
-        @Schema(description = "Parent property keys for meta property")
         @JsonProperty("properties")
         public String[] properties;
-        @Schema(description = "User-defined metadata")
         @JsonProperty("user_data")
         public Userdata userdata;
-        @Schema(description = "Whether to check if property key exists before creation")
         @JsonProperty("check_exist")
         public Boolean checkExist;
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/VertexLabelAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/VertexLabelAPI.java
index c86622f7e5..70f448f288 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/VertexLabelAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/schema/VertexLabelAPI.java
@@ -43,8 +43,6 @@
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.google.common.collect.ImmutableMap;
 
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.annotation.security.RolesAllowed;
 import jakarta.inject.Singleton;
@@ -75,9 +73,7 @@ public class VertexLabelAPI extends API {
                    "$action=vertex_label_write"})
     @RedirectFilter.RedirectMasterRole
     public String create(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
                          JsonVertexLabel jsonVertexLabel) {
         LOG.debug("Graph [{}] create vertex label: {}",
@@ -99,13 +95,9 @@ public String create(@Context GraphManager manager,
                    "$action=vertex_label_write"})
     @RedirectFilter.RedirectMasterRole
     public String update(@Context GraphManager manager,
-                         @Parameter(description = "The graph space name")
                          @PathParam("graphspace") String graphSpace,
-                         @Parameter(description = "The graph name")
                          @PathParam("graph") String graph,
-                         @Parameter(description = "The vertex label name")
                          @PathParam("name") String name,
-                         @Parameter(description = "Action to perform: 'append' or 'remove'")
                          @QueryParam("action") String action,
                          JsonVertexLabel jsonVertexLabel) {
         LOG.debug("Graph [{}] {} vertex label: {}",
@@ -132,11 +124,8 @@ public String update(@Context GraphManager manager,
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
                    "$action=vertex_label_read"})
     public String list(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The graph name")
                        @PathParam("graph") String graph,
-                       @Parameter(description = "Filter vertex labels by names")
                        @QueryParam("names") List<String> names) {
         boolean listAll = CollectionUtils.isEmpty(names);
         if (listAll) {
@@ -165,11 +154,8 @@ public String list(@Context GraphManager manager,
     @RolesAllowed({"space_member", "$graphspace=$graphspace $owner=$graph " +
                    "$action=vertex_label_read"})
     public String get(@Context GraphManager manager,
-                      @Parameter(description = "The graph space name")
                       @PathParam("graphspace") String graphSpace,
-                      @Parameter(description = "The graph name")
                       @PathParam("graph") String graph,
-                      @Parameter(description = "The vertex label name")
                       @PathParam("name") String name) {
         LOG.debug("Graph [{}] get vertex label by name '{}'", graph, name);
@@ -188,11 +174,8 @@ public String get(@Context GraphManager manager,
                    "$action=vertex_label_delete"})
     @RedirectFilter.RedirectMasterRole
     public Map<String, Id> delete(@Context GraphManager manager,
-                                  @Parameter(description = "The graph space name")
                                   @PathParam("graphspace") String graphSpace,
-                                  @Parameter(description = "The graph name")
                                   @PathParam("graph") String graph,
-                                  @Parameter(description = "The vertex label name to delete")
                                   @PathParam("name") String name) {
         LOG.debug("Graph [{}] remove vertex label by name '{}'", graph, name);
@@ -207,41 +190,28 @@ public Map<String, Id> delete(@Context GraphManager manager,
      * JsonVertexLabel is only used to receive create and append requests
      */
     @JsonIgnoreProperties(value = {"index_labels", "status"})
-    @Schema(description = "Vertex label creation/update request")
     private static class JsonVertexLabel implements Checkable {
 
-        @Schema(description = "The vertex label ID (only used in RESTORING mode)")
         @JsonProperty("id")
         public long id;
-        @Schema(description = "The vertex label name", required = true)
         @JsonProperty("name")
         public String name;
-        @Schema(description = "The ID strategy: AUTOMATIC, PRIMARY_KEY, " +
-                              "CUSTOMIZE_STRING, CUSTOMIZE_NUMBER, CUSTOMIZE_UUID")
         @JsonProperty("id_strategy")
         public IdStrategy idStrategy;
-        @Schema(description = "The property key names associated with this vertex label")
         @JsonProperty("properties")
         public String[] properties;
-        @Schema(description = "The primary key names (used with PRIMARY_KEY strategy)")
         @JsonProperty("primary_keys")
         public String[] primaryKeys;
-        @Schema(description = "The nullable property key names")
         @JsonProperty("nullable_keys")
         public String[] nullableKeys;
-        @Schema(description = "Time-to-live in seconds")
         @JsonProperty("ttl")
         public long ttl;
-        @Schema(description = "The property key name to use as TTL start time")
         @JsonProperty("ttl_start_time")
         public String ttlStartTime;
-        @Schema(description = "Whether to enable label indexing")
         @JsonProperty("enable_label_index")
         public Boolean enableLabelIndex;
-        @Schema(description = "User-defined metadata")
         @JsonProperty("user_data")
         public Userdata userdata;
-        @Schema(description = "Whether to check if vertex label exists before creation")
         @JsonProperty("check_exist")
         public Boolean checkExist;
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/space/GraphSpaceAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/space/GraphSpaceAPI.java
index 35bc40aed0..bd0fb4e84c 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/space/GraphSpaceAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/space/GraphSpaceAPI.java
@@ -45,8 +45,6 @@
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.google.common.collect.ImmutableMap;
 
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.annotation.security.RolesAllowed;
 import jakarta.inject.Singleton;
@@ -78,7 +76,6 @@ public class GraphSpaceAPI extends API {
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public Object list(@Context GraphManager manager,
                        @Context SecurityContext sc) {
-        ensurePdModeEnabled(manager);
         Set<String> spaces = manager.graphSpaces();
         return ImmutableMap.of("graphSpaces", spaces);
     }
@@ -88,9 +85,7 @@ public Object list(@Context GraphManager manager,
     @Path("{graphspace}")
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public Object get(@Context GraphManager manager,
-                      @Parameter(description = "The name of the graph space")
                       @PathParam("graphspace") String graphSpace) {
-        ensurePdModeEnabled(manager);
         manager.getSpaceStorage(graphSpace);
 
         GraphSpace gs = space(manager, graphSpace);
@@ -109,11 +104,8 @@ public Object get(@Context GraphManager manager,
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"admin"})
     public Object listProfile(@Context GraphManager manager,
-                              @Parameter(description = "Filter graph spaces by " +
-                                                       "name or nickname prefix")
                               @QueryParam("prefix") String prefix,
                               @Context SecurityContext sc) {
-        ensurePdModeEnabled(manager);
         Set<String> spaces = manager.graphSpaces();
         List<Map<String, Object>> spaceList = new ArrayList<>();
         List<Map<String, Object>> result = new ArrayList<>();
@@ -163,7 +155,7 @@ public Object listProfile(@Context GraphManager manager,
     @RolesAllowed({"admin"})
     public String create(@Context GraphManager manager,
                          JsonGraphSpace jsonGraphSpace) {
-        ensurePdModeEnabled(manager);
+
         jsonGraphSpace.checkCreate(false);
 
         String creator = HugeGraphAuthProxy.username();
@@ -192,10 +184,9 @@ public boolean isPrefix(Map<String, Object> profile, String prefix) {
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"admin"})
     public Map<String, Object> manage(@Context GraphManager manager,
-                                      @Parameter(description = "The name of the graph space")
                                       @PathParam("name") String name,
                                       Map<String, Object> actionMap) {
-        ensurePdModeEnabled(manager);
+
         E.checkArgument(actionMap != null && actionMap.size() == 2 &&
                         actionMap.containsKey(GRAPH_SPACE_ACTION),
                         "Invalid request body '%s'", actionMap);
@@ -323,9 +314,7 @@ public Map<String, Object> manage(@Context GraphManager manager,
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     @RolesAllowed({"admin"})
     public void delete(@Context GraphManager manager,
-                       @Parameter(description = "The name of the graph space")
                        @PathParam("name") String name) {
-        ensurePdModeEnabled(manager);
         manager.dropGraphSpace(name);
     }
@@ -349,70 +338,51 @@ private boolean verifyPermission(String user, AuthManager authManager, String gr
     private static class JsonGraphSpace implements Checkable {
 
         @JsonProperty("name")
-        @Schema(description = "The name of the graph space", required = true)
         public String name;
         @JsonProperty("nickname")
-        @Schema(description = "The nickname of the graph space")
         public String nickname;
         @JsonProperty("description")
-        @Schema(description = "The description of the graph space")
         public String description;
         @JsonProperty("cpu_limit")
-        @Schema(description = "The CPU limit for the graph space", required = true)
         public int cpuLimit;
         @JsonProperty("memory_limit")
-        @Schema(description = "The memory limit for the graph space", required = true)
         public int memoryLimit;
         @JsonProperty("storage_limit")
-        @Schema(description = "The storage limit for the graph space", required = true)
         public int storageLimit;
         @JsonProperty("compute_cpu_limit")
-        @Schema(description = "The compute CPU limit for the graph space")
         public int computeCpuLimit = 0;
         @JsonProperty("compute_memory_limit")
-        @Schema(description = "The compute memory limit for the graph space")
         public int computeMemoryLimit = 0;
         @JsonProperty("oltp_namespace")
-        @Schema(description = "The OLTP namespace for the graph space")
         public String oltpNamespace = "";
         @JsonProperty("olap_namespace")
-        @Schema(description = "The OLAP namespace for the graph space")
         public String olapNamespace = "";
         @JsonProperty("storage_namespace")
-        @Schema(description = "The storage namespace for the graph space")
         public String storageNamespace = "";
         @JsonProperty("max_graph_number")
-        @Schema(description = "The maximum number of graphs allowed in the space", required = true)
         public int maxGraphNumber;
         @JsonProperty("max_role_number")
-        @Schema(description = "The maximum number of roles allowed in the space")
         public int maxRoleNumber;
         @JsonProperty("dp_username")
-        @Schema(description = "The data platform username for the graph space")
         public String dpUserName;
         @JsonProperty("dp_password")
-        @Schema(description = "The data platform password for the graph space")
         public String dpPassWord;
         @JsonProperty("auth")
-        @Schema(description = "Whether authentication is enabled for the graph space")
         public boolean auth = false;
         @JsonProperty("configs")
-        @Schema(description = "Additional configurations for the graph space")
         public Map<String, Object> configs;
         @JsonProperty("operator_image_path")
-        @Schema(description = "The operator image path for the graph space")
         public String operatorImagePath = "";
         @JsonProperty("internal_algorithm_image_url")
-        @Schema(description = "The internal algorithm image URL for the graph space")
         public String internalAlgorithmImageUrl = "";
 
         @Override
@@ -488,13 +458,10 @@ public String toString() {
     private static class JsonDefaultRole implements Checkable {
 
         @JsonProperty("user")
-        @Schema(description = "The username")
         private String user;
         @JsonProperty("role")
-        @Schema(description = "The role name")
         private String role;
         @JsonProperty("graph")
-        @Schema(description = "The graph name")
         private String graph;
 
         @Override
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/AllShortestPathsAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/AllShortestPathsAPI.java
index 3880b1239e..beefdea25b 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/AllShortestPathsAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/AllShortestPathsAPI.java
@@ -41,7 +41,6 @@
 import com.codahale.metrics.annotation.Timed;
 import com.google.common.collect.ImmutableList;
 
-import io.swagger.v3.oas.annotations.Parameter;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.DefaultValue;
@@ -63,33 +62,21 @@ public class AllShortestPathsAPI extends API {
     @Timed
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String get(@Context GraphManager manager,
-                      @Parameter(description = "The graph space name")
                       @PathParam("graphspace") String graphSpace,
-                      @Parameter(description = "The graph name")
                       @PathParam("graph") String graph,
-                      @Parameter(description = "The source vertex ID")
                       @QueryParam("source") String source,
-                      @Parameter(description = "The target vertex ID")
                       @QueryParam("target") String target,
-                      @Parameter(description = "The direction of traversal")
                       @QueryParam("direction") String direction,
-                      @Parameter(description = "The edge label to traverse")
                       @QueryParam("label") String edgeLabel,
-                      @Parameter(description = "The maximum depth of traversal")
                       @QueryParam("max_depth") int depth,
-                      @Parameter(description = "The maximum degree of vertices")
                       @QueryParam("max_degree") @DefaultValue(DEFAULT_MAX_DEGREE)
                       long maxDegree,
-                      @Parameter(description = "The degree to skip")
                       @QueryParam("skip_degree") @DefaultValue("0")
                       long skipDegree,
-                      @Parameter(description = "Whether to include vertex details")
                       @QueryParam("with_vertex") @DefaultValue("false")
                       boolean withVertex,
-                      @Parameter(description = "Whether to include edge details")
                       @QueryParam("with_edge") @DefaultValue("false")
                       boolean withEdge,
-                      @Parameter(description = "The capacity of the traversal")
                       @QueryParam("capacity") @DefaultValue(DEFAULT_CAPACITY)
                       long capacity) {
         LOG.debug("Graph [{}] get shortest path from '{}', to '{}' with " +
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/CountAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/CountAPI.java
index 7dd58dd892..e14f0a43df 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/CountAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/CountAPI.java
@@ -42,8 +42,6 @@
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.google.common.collect.ImmutableMap;
 
-import io.swagger.v3.oas.annotations.Parameter;
-import io.swagger.v3.oas.annotations.media.Schema;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.POST;
@@ -63,9 +61,7 @@ public class CountAPI extends API {
     @Timed
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String post(@Context GraphManager manager,
-                       @Parameter(description = "The graph space name")
                        @PathParam("graphspace") String graphSpace,
-                       @Parameter(description = "The graph name")
                        @PathParam("graph") String graph,
                        CountRequest request) {
         LOG.debug("Graph [{}] get count from '{}' with request {}",
@@ -104,16 +100,12 @@ private static List<CountTraverser.Step> steps(HugeGraph graph, CountRequest request) {
     private static class CountRequest {
 
         @JsonProperty("source")
-        @Schema(description = "The source vertex ID", required = true)
         public Object source;
         @JsonProperty("steps")
-        @Schema(description = "The steps to traverse", required = true)
         public List<Step> steps;
         @JsonProperty("contains_traversed")
-        @Schema(description = "Whether to include traversed vertices")
         public boolean containsTraversed = false;
         @JsonProperty("dedup_size")
-        @Schema(description = "The deduplication size limit")
         public long dedupSize = 1000000L;
 
         @Override
@@ -128,20 +120,15 @@ public String toString() {
     private static class Step {
 
         @JsonProperty("direction")
-        @Schema(description = "The direction of traversal", example = "BOTH")
         public Directions direction = Directions.BOTH;
         @JsonProperty("labels")
-        @Schema(description = "The edge labels to traverse")
         public List<String> labels;
         @JsonProperty("properties")
-        @Schema(description = "The properties to filter edges")
         public Map<String, Object> properties;
         @JsonAlias("degree")
         @JsonProperty("max_degree")
-        @Schema(description = "The maximum degree of vertices to traverse")
         public long maxDegree = Long.parseLong(DEFAULT_MAX_DEGREE);
         @JsonProperty("skip_degree")
-        @Schema(description = "The degree to skip when traversing")
         public long skipDegree = Long.parseLong(DEFAULT_SKIP_DEGREE);
 
         @Override
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/EdgesAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/EdgesAPI.java
index 807fcce92c..b3d718d4f0 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/EdgesAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/EdgesAPI.java
@@ -38,7 +38,6 @@
 import com.codahale.metrics.annotation.Timed;
 
-import io.swagger.v3.oas.annotations.Parameter;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.DefaultValue;
@@ -88,7 +87,6 @@ public String list(@Context GraphManager manager,
     public String shards(@Context GraphManager manager,
                          @PathParam("graphspace") String graphSpace,
                          @PathParam("graph") String graph,
-                         @Parameter(description = "The split size for shards")
                          @QueryParam("split_size") long splitSize) {
         LOG.debug("Graph [{}] get vertex shards with split size '{}'",
                   graph, splitSize);
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/KneighborAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/KneighborAPI.java
index 83183d08c5..3912d9c764 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/KneighborAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/KneighborAPI.java
@@ -50,7 +50,6 @@
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.ImmutableMap;
 
-import io.swagger.v3.oas.annotations.Parameter;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.Consumes;
@@ -74,25 +73,16 @@ public class KneighborAPI extends TraverserAPI {
     @Timed
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String get(@Context GraphManager manager,
-                      @Parameter(description = "The graph space name")
                       @PathParam("graphspace") String graphSpace,
-                      @Parameter(description = "The graph name")
                       @PathParam("graph") String graph,
-                      @Parameter(description = "The source vertex ID")
                       @QueryParam("source") String sourceV,
-                      @Parameter(description = "The direction of traversal")
                       @QueryParam("direction") String direction,
-                      @Parameter(description = "The edge label to traverse")
                       @QueryParam("label") String edgeLabel,
-                      @Parameter(description = "The maximum depth of traversal")
                       @QueryParam("max_depth") int depth,
-                      @Parameter(description = "Whether to return only count")
                       @QueryParam("count_only") @DefaultValue("false")
                       boolean countOnly,
-                      @Parameter(description = "The maximum degree of vertices")
                       @QueryParam("max_degree") @DefaultValue(DEFAULT_MAX_DEGREE)
                       long maxDegree,
-                      @Parameter(description = "The maximum number of results")
                       @QueryParam("limit") @DefaultValue(DEFAULT_ELEMENTS_LIMIT)
                       int limit) {
         LOG.debug("Graph [{}] get k-neighbor from '{}' with " +
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/KoutAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/KoutAPI.java
index e784e38b40..2a0e29662f 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/KoutAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/KoutAPI.java
@@ -50,7 +50,6 @@
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.ImmutableMap;
 
-import io.swagger.v3.oas.annotations.Parameter;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.Consumes;
@@ -74,31 +73,20 @@ public class KoutAPI extends TraverserAPI {
     @Timed
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String get(@Context GraphManager manager,
-                      @Parameter(description = "The graph space name")
                       @PathParam("graphspace") String graphSpace,
-                      @Parameter(description = "The graph name")
                       @PathParam("graph") String graph,
-                      @Parameter(description = "The source vertex ID")
                       @QueryParam("source") String source,
-                      @Parameter(description = "The direction of traversal")
                       @QueryParam("direction") String direction,
-                      @Parameter(description = "The edge label to traverse")
                       @QueryParam("label") String edgeLabel,
-                      @Parameter(description = "The maximum depth of traversal")
                       @QueryParam("max_depth") int depth,
-                      @Parameter(description = "Whether to find nearest vertices first")
                       @QueryParam("nearest") @DefaultValue("true")
                       boolean nearest,
-                      @Parameter(description = "Whether to return only count")
                       @QueryParam("count_only") @DefaultValue("false")
                       boolean count_only,
-                      @Parameter(description = "The maximum degree of vertices")
                       @QueryParam("max_degree") @DefaultValue(DEFAULT_MAX_DEGREE)
                       long maxDegree,
-                      @Parameter(description = "The capacity of the traversal")
                       @QueryParam("capacity") @DefaultValue(DEFAULT_CAPACITY)
                       long capacity,
-                      @Parameter(description = "The maximum number of results")
                       @QueryParam("limit") @DefaultValue(DEFAULT_ELEMENTS_LIMIT)
                       int limit) {
         LOG.debug("Graph [{}] get k-out from '{}' with " +
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/ShortestPathAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/ShortestPathAPI.java
index 6a6ecd2317..e53d7a7d1b 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/ShortestPathAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/ShortestPathAPI.java
@@ -41,7 +41,6 @@
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.ImmutableMap;
 
-import io.swagger.v3.oas.annotations.Parameter;
 import io.swagger.v3.oas.annotations.tags.Tag;
 import jakarta.inject.Singleton;
 import jakarta.ws.rs.DefaultValue;
@@ -63,33 +62,21 @@ public class ShortestPathAPI extends API {
     @Timed
     @Produces(APPLICATION_JSON_WITH_CHARSET)
     public String get(@Context GraphManager manager,
-                      @Parameter(description = "The graph space name")
                       @PathParam("graphspace") String graphSpace,
-                      @Parameter(description = "The graph name")
                       @PathParam("graph") String graph,
-                      @Parameter(description = "The source vertex ID")
                       @QueryParam("source") String source,
-                      @Parameter(description = "The target vertex ID")
                       @QueryParam("target") String target,
-                      @Parameter(description = "The direction of traversal")
                       @QueryParam("direction") String direction,
-                      @Parameter(description = "The edge label to traverse")
                       @QueryParam("label") String edgeLabel,
-                      @Parameter(description = "The maximum depth of traversal")
                       @QueryParam("max_depth") int depth,
-                      @Parameter(description = "The maximum degree of vertices")
                       @QueryParam("max_degree") @DefaultValue(DEFAULT_MAX_DEGREE)
                       long maxDegree,
-                      @Parameter(description = "The degree to skip")
                       @QueryParam("skip_degree") @DefaultValue("0")
                       long skipDegree,
-                      @Parameter(description = "Whether to include vertex details")
                       @QueryParam("with_vertex") @DefaultValue("false")
                       boolean withVertex,
-                      @Parameter(description = "Whether to include edge details")
                       @QueryParam("with_edge") @DefaultValue("false")
                       boolean withEdge,
-                      @Parameter(description = "The capacity of the traversal")
                       @QueryParam("capacity") @DefaultValue(DEFAULT_CAPACITY)
                       long capacity) {
         LOG.debug("Graph [{}] get shortest path from '{}', to '{}' with " +
diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/TraverserAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/TraverserAPI.java
index 28f776a3e6..923b3d43fa 100644
--- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/TraverserAPI.java
+++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/TraverserAPI.java
@@ -32,8 +32,6 @@
 import com.fasterxml.jackson.annotation.JsonAlias;
 import com.fasterxml.jackson.annotation.JsonProperty;
 
-import io.swagger.v3.oas.annotations.media.Schema;
-
 public class TraverserAPI extends API {
 
     protected static EdgeStep step(HugeGraph graph, Step step) {
@@ -63,20 +61,15 @@ protected static Steps steps(HugeGraph graph, VESteps steps) {
     protected static class Step {
 
         @JsonProperty("direction")
-        @Schema(description = "The direction of traversal", example = "BOTH")
         public Directions direction;
         @JsonProperty("labels")
-        @Schema(description = "The edge labels to traverse")
         public List<String> labels;
         @JsonProperty("properties")
-        @Schema(description = "The properties to filter edges")
         public Map<String, Object> properties;
         @JsonAlias("degree")
         @JsonProperty("max_degree")
-        @Schema(description = "The maximum degree of vertices to traverse")
         public long maxDegree = Long.parseLong(DEFAULT_MAX_DEGREE);
         @JsonProperty("skip_degree")
-        @Schema(description = "The degree to skip when traversing")
         public long skipDegree = 0L;
 
         @Override
@@ -91,11 +84,9 @@ public String toString() {
     protected static class VEStepEntity {
@JsonProperty("label") - @Schema(description = "The label of the step") public String label; @JsonProperty("properties") - @Schema(description = "The properties for the step") public Map properties; @Override @@ -108,20 +99,15 @@ public String toString() { protected static class VESteps { @JsonProperty("direction") - @Schema(description = "The direction of traversal", example = "BOTH") public Directions direction; @JsonAlias("degree") @JsonProperty("max_degree") - @Schema(description = "The maximum degree of vertices to traverse") public long maxDegree = Long.parseLong(DEFAULT_MAX_DEGREE); @JsonProperty("skip_degree") - @Schema(description = "The degree to skip when traversing") public long skipDegree = 0L; @JsonProperty("vertex_steps") - @Schema(description = "The vertex steps in the traversal") public List vSteps; @JsonProperty("edge_steps") - @Schema(description = "The edge steps in the traversal") public List eSteps; @Override diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/Vertices.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/Vertices.java index 3efee83ab2..d5be694893 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/Vertices.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/Vertices.java @@ -36,18 +36,13 @@ import com.fasterxml.jackson.annotation.JsonProperty; -import io.swagger.v3.oas.annotations.media.Schema; - public class Vertices { @JsonProperty("ids") - @Schema(description = "The vertex IDs", example = "[\"1:Tom\", \"2:Mary\"]") public Set ids; @JsonProperty("label") - @Schema(description = "The vertex label", example = "person") public String label; @JsonProperty("properties") - @Schema(description = "The vertex properties in key-value format") public Map properties; public Iterator vertices(HugeGraph g) { diff --git 
a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/VerticesAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/VerticesAPI.java index 2f853ec352..762bbf81c6 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/VerticesAPI.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/VerticesAPI.java @@ -38,7 +38,6 @@ import com.codahale.metrics.annotation.Timed; -import io.swagger.v3.oas.annotations.Parameter; import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.inject.Singleton; import jakarta.ws.rs.DefaultValue; @@ -61,11 +60,8 @@ public class VerticesAPI extends API { @Compress @Produces(APPLICATION_JSON_WITH_CHARSET) public String list(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The vertex IDs") @QueryParam("ids") List stringIds) { LOG.debug("Graph [{}] get vertices by ids: {}", graph, stringIds); diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/WeightedShortestPathAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/WeightedShortestPathAPI.java index e705bfba09..3cea3702db 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/WeightedShortestPathAPI.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/traversers/WeightedShortestPathAPI.java @@ -41,7 +41,6 @@ import com.codahale.metrics.annotation.Timed; -import io.swagger.v3.oas.annotations.Parameter; import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.inject.Singleton; import jakarta.ws.rs.DefaultValue; @@ -63,32 +62,21 @@ public class WeightedShortestPathAPI extends API { @Timed 
@Produces(APPLICATION_JSON_WITH_CHARSET) public String get(@Context GraphManager manager, - @Parameter(description = "The graph space name") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The graph name") @PathParam("graph") String graph, - @Parameter(description = "The source vertex ID") @QueryParam("source") String source, - @Parameter(description = "The target vertex ID") @QueryParam("target") String target, - @Parameter(description = "The direction of traversal") @QueryParam("direction") String direction, - @Parameter(description = "The edge label to traverse") @QueryParam("label") String edgeLabel, - @Parameter(description = "The weight property name") @QueryParam("weight") String weight, - @Parameter(description = "The maximum degree of vertices") @QueryParam("max_degree") @DefaultValue(DEFAULT_MAX_DEGREE) long maxDegree, - @Parameter(description = "The degree to skip") @QueryParam("skip_degree") + @QueryParam("skip_degree") @DefaultValue("0") long skipDegree, - @Parameter(description = "Whether to include vertex details") @QueryParam("with_vertex") @DefaultValue("false") boolean withVertex, - @Parameter(description = "Whether to include edge details") @QueryParam("with_edge") @DefaultValue("false") boolean withEdge, - @Parameter(description = "The capacity of the traversal") @QueryParam("capacity") @DefaultValue(DEFAULT_CAPACITY) long capacity) { LOG.debug("Graph [{}] get weighted shortest path between '{}' and " + diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/variables/VariablesAPI.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/variables/VariablesAPI.java index 680c42c7e3..0d878d9262 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/variables/VariablesAPI.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/api/variables/VariablesAPI.java @@ -30,8 +30,6 @@ import com.codahale.metrics.annotation.Timed; import 
com.google.common.collect.ImmutableMap; -import io.swagger.v3.oas.annotations.Parameter; -import io.swagger.v3.oas.annotations.media.Schema; import io.swagger.v3.oas.annotations.tags.Tag; import jakarta.inject.Singleton; import jakarta.ws.rs.Consumes; @@ -57,11 +55,8 @@ public class VariablesAPI extends API { @Consumes(APPLICATION_JSON) @Produces(APPLICATION_JSON_WITH_CHARSET) public Map update(@Context GraphManager manager, - @Parameter(description = "The name of the graph space") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The name of the graph") @PathParam("graph") String graph, - @Parameter(description = "The key of the variable") @PathParam("key") String key, JsonVariableValue value) { E.checkArgument(value != null && value.data != null, @@ -77,9 +72,7 @@ public Map update(@Context GraphManager manager, @Timed @Produces(APPLICATION_JSON_WITH_CHARSET) public Map list(@Context GraphManager manager, - @Parameter(description = "The name of the graph space") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The name of the graph") @PathParam("graph") String graph) { LOG.debug("Graph [{}] get variables", graph); @@ -92,11 +85,8 @@ public Map list(@Context GraphManager manager, @Path("{key}") @Produces(APPLICATION_JSON_WITH_CHARSET) public Map get(@Context GraphManager manager, - @Parameter(description = "The name of the graph space") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The name of the graph") @PathParam("graph") String graph, - @Parameter(description = "The key of the variable") @PathParam("key") String key) { LOG.debug("Graph [{}] get variable by key '{}'", graph, key); @@ -114,11 +104,8 @@ public Map get(@Context GraphManager manager, @Path("{key}") @Consumes(APPLICATION_JSON) public void delete(@Context GraphManager manager, - @Parameter(description = "The name of the graph space") @PathParam("graphspace") String graphSpace, - @Parameter(description = "The name of the graph") 
@PathParam("graph") String graph, - @Parameter(description = "The key of the variable") @PathParam("key") String key) { LOG.debug("Graph [{}] remove variable by key '{}'", graph, key); @@ -128,7 +115,6 @@ public void delete(@Context GraphManager manager, private static class JsonVariableValue { - @Schema(description = "The value of the variable", required = true) public Object data; @Override diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/config/ServerOptions.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/config/ServerOptions.java index 278542854b..f699ac199c 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/config/ServerOptions.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/config/ServerOptions.java @@ -556,9 +556,9 @@ public class ServerOptions extends OptionHolder { public static final ConfigOption SERVER_ID = new ConfigOption<>( "server.id", - "The id of hugegraph-server.", - disallowEmpty(), - "server-1" + "The id of hugegraph-server, auto-generated if not specified.", + null, + "" ); public static final ConfigOption SERVER_ROLE = new ConfigOption<>( diff --git a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/core/GraphManager.java b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/core/GraphManager.java index 770e75cc74..26993fa2bd 100644 --- a/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/core/GraphManager.java +++ b/hugegraph-server/hugegraph-api/src/main/java/org/apache/hugegraph/core/GraphManager.java @@ -33,6 +33,7 @@ import java.util.Map; import java.util.Objects; import java.util.Set; +import java.util.UUID; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; @@ -68,6 +69,7 @@ import org.apache.hugegraph.config.TypedOption; import org.apache.hugegraph.event.EventHub; import 
org.apache.hugegraph.exception.ExistedException; +import org.apache.hugegraph.exception.NotFoundException; import org.apache.hugegraph.exception.NotSupportException; import org.apache.hugegraph.io.HugeGraphSONModule; import org.apache.hugegraph.k8s.K8sDriver; @@ -195,7 +197,17 @@ public final class GraphManager { public GraphManager(HugeConfig conf, EventHub hub) { LOG.info("Init graph manager"); E.checkArgumentNotNull(conf, "The config can't be null"); + + // Auto-generate server.id if not configured. + // Random generation prevents duplicate-id errors. This id is currently + // meaningless; the serverInfoManager needs to be completely removed + // in the future. String server = conf.get(ServerOptions.SERVER_ID); + if (StringUtils.isEmpty(server)) { + server = "server-" + UUID.randomUUID().toString().substring(0, 8); + LOG.info("Auto-generated server.id: {}", server); + conf.setProperty(ServerOptions.SERVER_ID.name(), server); + } String role = conf.get(ServerOptions.SERVER_ROLE); this.config = conf; @@ -206,10 +218,6 @@ public GraphManager(HugeConfig conf, EventHub hub) { conf.get(ServerOptions.SERVER_DEPLOY_IN_K8S); this.startIgnoreSingleGraphError = conf.get( ServerOptions.SERVER_START_IGNORE_SINGLE_GRAPH_ERROR); - E.checkArgument(server != null && !server.isEmpty(), "The server name can't be null or empty"); - E.checkArgument(role != null && !role.isEmpty(), "The server role can't be null or empty"); this.graphsDir = conf.get(ServerOptions.GRAPHS); this.cluster = conf.get(ServerOptions.CLUSTER); this.graphSpaces = new ConcurrentHashMap<>(); @@ -1557,6 +1565,14 @@ private void loadGraph(String name, String graphConfPath) { String raftGroupPeers = this.conf.get(ServerOptions.RAFT_GROUP_PEERS); config.addProperty(ServerOptions.RAFT_GROUP_PEERS.name(), raftGroupPeers); + + // Transfer `pd.peers` from server config to graph config + // Only inject if not already configured in graph config + if (!config.containsKey("pd.peers")) { + String pdPeers =
this.conf.get(ServerOptions.PD_PEERS); + config.addProperty("pd.peers", pdPeers); + } + this.transferRoleWorkerConfig(config); Graph graph = GraphFactory.open(config); @@ -1637,10 +1653,6 @@ private void checkBackendVersionOrExit(HugeConfig config) { private void initNodeRole() { String id = config.get(ServerOptions.SERVER_ID); String role = config.get(ServerOptions.SERVER_ROLE); - E.checkArgument(StringUtils.isNotEmpty(id), - "The server name can't be null or empty"); - E.checkArgument(StringUtils.isNotEmpty(role), - "The server role can't be null or empty"); NodeRole nodeRole = NodeRole.valueOf(role.toUpperCase()); boolean supportRoleElection = !nodeRole.computer() && @@ -1960,7 +1972,7 @@ public HugeGraph graph(String graphSpace, String name) { } else if (graph instanceof HugeGraph) { return (HugeGraph) graph; } - throw new NotSupportException("graph instance of %s", graph.getClass()); + throw new NotFoundException(String.format("Graph '%s' does not exist", name)); } public void dropGraphLocal(String name) { diff --git a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/StandardHugeGraph.java b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/StandardHugeGraph.java index faf97aa8d6..5864e2a615 100644 --- a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/StandardHugeGraph.java +++ b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/StandardHugeGraph.java @@ -176,7 +176,6 @@ public class StandardHugeGraph implements HugeGraph { private final BackendStoreProvider storeProvider; private final TinkerPopTransaction tx; private final RamTable ramtable; - private final String schedulerType; private volatile boolean started; private volatile boolean closed; private volatile GraphMode mode; @@ -229,7 +228,6 @@ public StandardHugeGraph(HugeConfig config) { this.closed = false; this.mode = GraphMode.NONE; this.readMode = GraphReadMode.OLTP_ONLY; - this.schedulerType = 
config.get(CoreOptions.SCHEDULER_TYPE); LockUtil.init(this.spaceGraphName()); @@ -315,6 +313,7 @@ public String backend() { return this.storeProvider.type(); } + @Override public BackendStoreInfo backendStoreInfo() { // Just for trigger Tx.getOrNewTransaction, then load 3 stores // TODO: pass storeProvider.metaStore() @@ -465,6 +464,7 @@ public void updateTime(Date updateTime) { this.updateTime = updateTime; } + @Override public void waitStarted() { // Just for trigger Tx.getOrNewTransaction, then load 3 stores this.schemaTransaction(); @@ -1629,7 +1629,9 @@ public void submitEphemeralJob(EphemeralJob job) { @Override public String schedulerType() { - return StandardHugeGraph.this.schedulerType; + // Use distributed scheduler for hstore backend, otherwise use local + // After the merger of rocksdb and hstore, consider whether to change this logic + return StandardHugeGraph.this.isHstore() ? "distributed" : "local"; } } diff --git a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/config/CoreOptions.java b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/config/CoreOptions.java index ba4d4a1c0e..72a2da9324 100644 --- a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/config/CoreOptions.java +++ b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/config/CoreOptions.java @@ -303,13 +303,7 @@ public class CoreOptions extends OptionHolder { rangeInt(1, 500), 1 ); - public static final ConfigOption SCHEDULER_TYPE = - new ConfigOption<>( - "task.scheduler_type", - "The type of scheduler used in distribution system.", - allowValues("local", "distributed"), - "local" - ); + public static final ConfigOption TASK_SYNC_DELETION = new ConfigOption<>( "task.sync_deletion", diff --git a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/masterelection/GlobalMasterInfo.java b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/masterelection/GlobalMasterInfo.java index 
c345c50e60..4856744459 100644 --- a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/masterelection/GlobalMasterInfo.java +++ b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/masterelection/GlobalMasterInfo.java @@ -22,7 +22,7 @@ import org.apache.hugegraph.type.define.NodeRole; import org.apache.hugegraph.util.E; -// TODO: rename to GlobalNodeRoleInfo +// TODO: completely remove the master-worker startup logic public final class GlobalMasterInfo { private static final NodeInfo NO_MASTER = new NodeInfo(false, ""); diff --git a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/masterelection/StandardRoleListener.java b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/masterelection/StandardRoleListener.java index dbbea6d91e..74515dacec 100644 --- a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/masterelection/StandardRoleListener.java +++ b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/masterelection/StandardRoleListener.java @@ -17,12 +17,12 @@ package org.apache.hugegraph.masterelection; -import java.util.Objects; - import org.apache.hugegraph.task.TaskManager; import org.apache.hugegraph.util.Log; import org.slf4j.Logger; +import java.util.Objects; + public class StandardRoleListener implements RoleListener { private static final Logger LOG = Log.logger(StandardRoleListener.class); @@ -36,7 +36,6 @@ public class StandardRoleListener { public StandardRoleListener(TaskManager taskManager, GlobalMasterInfo roleInfo) { this.taskManager = taskManager; - this.taskManager.enableRoleElection(); this.roleInfo = roleInfo; this.selfIsMaster = false; } diff --git a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/DistributedTaskScheduler.java b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/DistributedTaskScheduler.java index b4bba2ea12..7c143fb33d 100644 ---
a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/DistributedTaskScheduler.java +++ b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/DistributedTaskScheduler.java @@ -19,7 +19,9 @@ import java.util.Iterator; import java.util.concurrent.Callable; +import java.util.concurrent.CancellationException; import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ExecutionException; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.Future; @@ -48,6 +50,7 @@ import org.slf4j.Logger; public class DistributedTaskScheduler extends TaskAndResultScheduler { + private static final Logger LOG = Log.logger(DistributedTaskScheduler.class); private final long schedulePeriod; private final ExecutorService taskDbExecutor; @@ -118,6 +121,11 @@ private static boolean sleep(long ms) { public void cronSchedule() { // Perform periodic scheduling tasks + // Check closed flag first to exit early + if (this.closed.get()) { + return; + } + if (!this.graph.started() || this.graph.closed()) { return; } @@ -253,6 +261,10 @@ public Future schedule(HugeTask task) { return this.ephemeralTaskExecutor.submit(task); } + // Validate task state before saving to ensure correct exception type + E.checkState(task.type() != null, "Task type can't be null"); + E.checkState(task.name() != null, "Task name can't be null"); + // Process schema task // Handle gremlin task // Handle OLAP calculation tasks @@ -284,14 +296,41 @@ protected void initTaskParams(HugeTask task) { } } + /** + * Note: This method will update the status of the input task. 
+ * + * @param task the task to cancel; + * must not be null + */ @Override public void cancel(HugeTask task) { - // Update status to CANCELLING - if (!task.completed()) { - // Task not completed, can only execute status not CANCELLING - this.updateStatus(task.id(), null, TaskStatus.CANCELLING); + E.checkArgumentNotNull(task, "Task can't be null"); + + if (task.completed() || task.cancelling()) { + return; + } + + LOG.info("Cancel task '{}' in status {}", task.id(), task.status()); + + // Check if task is running locally, cancel it directly if so + HugeTask runningTask = this.runningTasks.get(task.id()); + if (runningTask != null) { + boolean cancelled = runningTask.cancel(true); + if (cancelled) { + task.overwriteStatus(TaskStatus.CANCELLED); + } + LOG.info("Cancel local running task '{}' result: {}", task.id(), cancelled); + return; + } + + // Task not running locally, update status to CANCELLING + // for cronSchedule() or other nodes to handle + TaskStatus currentStatus = task.status(); + if (!this.updateStatus(task.id(), currentStatus, TaskStatus.CANCELLING)) { + LOG.info("Failed to cancel task '{}', status may have changed from {}", + task.id(), currentStatus); } else { - LOG.info("cancel task({}) error, task has completed", task.id()); + task.overwriteStatus(TaskStatus.CANCELLING); } } @@ -316,14 +355,18 @@ protected HugeTask deleteFromDB(Id id) { @Override public HugeTask delete(Id id, boolean force) { - if (!force) { - // Change status to DELETING, perform the deletion operation through automatic - // scheduling.
+ HugeTask task = this.taskWithoutResult(id); + + if (!force && !task.completed()) { + // Check task status: can't delete running tasks without force this.updateStatus(id, null, TaskStatus.DELETING); return null; + // Already in DELETING status, delete directly from DB + // Completed tasks can also be deleted directly } - } else { - return this.deleteFromDB(id); + + // Delete from DB directly for completed/DELETING tasks or force=true + return this.deleteFromDB(id); } @@ -353,6 +396,18 @@ public boolean close() { cronFuture.cancel(false); } + // Wait for cron task to complete to ensure all transactions are closed + try { + cronFuture.get(schedulePeriod + 5, TimeUnit.SECONDS); + } catch (CancellationException e) { + // Task was cancelled, this is expected + LOG.debug("Cron task was cancelled"); + } catch (TimeoutException e) { + LOG.warn("Cron task did not complete in time when closing scheduler"); + } catch (ExecutionException | InterruptedException e) { + LOG.warn("Exception while waiting for cron task to complete", e); + } + if (!this.taskDbExecutor.isShutdown()) { this.call(() -> { try { @@ -363,7 +418,10 @@ public boolean close() { this.graph.closeTx(); }); } - return true; + + // TODO: serverInfoManager section should be removed in the future.
+ return this.serverManager().close(); + //return true; } @Override diff --git a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/HugeServerInfo.java b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/HugeServerInfo.java index 71feb3f688..f0485f6656 100644 --- a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/HugeServerInfo.java +++ b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/HugeServerInfo.java @@ -209,14 +209,6 @@ public static HugeServerInfo fromVertex(Vertex vertex) { return serverInfo; } - public boolean suitableFor(HugeTask task, long now) { - if (task.computer() != this.role.computer()) { - return false; - } - return this.updateTime.getTime() + EXPIRED_INTERVAL >= now && - this.load() + task.load() <= this.maxLoad; - } - public static Schema schema(HugeGraphParams graph) { return new Schema(graph); } diff --git a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/ServerInfoManager.java b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/ServerInfoManager.java index bcef869017..d4b0f27ad2 100644 --- a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/ServerInfoManager.java +++ b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/ServerInfoManager.java @@ -19,7 +19,6 @@ import static org.apache.hugegraph.backend.query.Query.NO_LIMIT; -import java.util.Collection; import java.util.Iterator; import java.util.Map; import java.util.concurrent.Callable; @@ -35,7 +34,6 @@ import org.apache.hugegraph.backend.query.QueryResults; import org.apache.hugegraph.backend.tx.GraphTransaction; import org.apache.hugegraph.exception.ConnectionException; -import org.apache.hugegraph.iterator.ListIterator; import org.apache.hugegraph.iterator.MapperIterator; import org.apache.hugegraph.masterelection.GlobalMasterInfo; import org.apache.hugegraph.schema.PropertyKey; @@ -64,7 +62,6 @@ public class 
ServerInfoManager { private volatile GlobalMasterInfo globalNodeInfo; - private volatile boolean onlySingleNode; private volatile boolean closed; public ServerInfoManager(HugeGraphParams graph, ExecutorService dbExecutor) { @@ -76,7 +73,6 @@ public ServerInfoManager(HugeGraphParams graph, ExecutorService dbExecutor) { this.globalNodeInfo = null; - this.onlySingleNode = false; this.closed = false; } @@ -115,7 +111,7 @@ public synchronized void initServerInfo(GlobalMasterInfo nodeInfo) { try { Thread.sleep(existed.expireTime() - now + 1); } catch (InterruptedException e) { - throw new HugeException("Interrupted when waiting for server info expired", e); + throw new HugeException("Interrupted when waiting for server info expired", e); } } E.checkArgument(existed == null || !existed.alive(), @@ -176,11 +172,6 @@ public boolean selfIsMaster() { return this.selfNodeRole() != null && this.selfNodeRole().master(); } - public boolean onlySingleNode() { - // Only exists one node in the whole master - return this.onlySingleNode; - } - public synchronized void heartbeat() { assert this.graphIsReady(); @@ -212,13 +203,6 @@ public synchronized void heartbeat() { assert serverInfo != null; } - public synchronized void decreaseLoad(int load) { - assert load > 0 : load; - HugeServerInfo serverInfo = this.selfServerInfo(); - serverInfo.increaseLoad(-load); - this.save(serverInfo); - } - public int calcMaxLoad() { // TODO: calc max load based on CPU and Memory resources return 10000; @@ -228,48 +212,6 @@ protected boolean graphIsReady() { return !this.closed && this.graph.started() && this.graph.initialized(); } - protected synchronized HugeServerInfo pickWorkerNode(Collection servers, - HugeTask task) { - HugeServerInfo master = null; - HugeServerInfo serverWithMinLoad = null; - int minLoad = Integer.MAX_VALUE; - boolean hasWorkerNode = false; - long now = DateUtil.now().getTime(); - - // Iterate servers to find suitable one - for (HugeServerInfo server : servers) { - if 
(!server.alive()) { - continue; - } - if (server.role().master()) { - master = server; - continue; - } - hasWorkerNode = true; - if (!server.suitableFor(task, now)) { - continue; - } - if (server.load() < minLoad) { - minLoad = server.load(); - serverWithMinLoad = server; - } - } - - boolean singleNode = !hasWorkerNode; - if (singleNode != this.onlySingleNode) { - LOG.info("Switch only_single_node to {}", singleNode); - this.onlySingleNode = singleNode; - } - - // Only schedule to master if there are no workers and master are suitable - if (!hasWorkerNode) { - if (master != null && master.suitableFor(task, now)) { - serverWithMinLoad = master; - } - } - return serverWithMinLoad; - } - private GraphTransaction tx() { assert Thread.currentThread().getName().contains("server-info-db-worker"); return this.graph.systemTransaction(); @@ -299,33 +241,6 @@ private Id save(HugeServerInfo serverInfo) { }); } - private int save(Collection serverInfos) { - return this.call(() -> { - if (serverInfos.isEmpty()) { - return 0; - } - HugeServerInfo.Schema schema = HugeServerInfo.schema(this.graph); - if (!schema.existVertexLabel(HugeServerInfo.P.SERVER)) { - throw new HugeException("Schema is missing for %s", HugeServerInfo.P.SERVER); - } - // Save server info in batch - GraphTransaction tx = this.tx(); - int updated = 0; - for (HugeServerInfo server : serverInfos) { - if (!server.updated()) { - continue; - } - HugeVertex vertex = tx.constructVertex(false, server.asArray()); - tx.addVertex(vertex); - updated++; - } - // NOTE: actually it is auto-commit, to be improved - tx.commitOrRollback(); - - return updated; - }); - } - private V call(Callable callable) { assert !Thread.currentThread().getName().startsWith( "server-info-db-worker") : "can't call by itself"; @@ -388,24 +303,6 @@ private HugeServerInfo removeServerInfo(Id serverId) { }); } - protected void updateServerInfos(Collection serverInfos) { - this.save(serverInfos); - } - - protected Collection allServerInfos() { - 
Iterator infos = this.serverInfos(NO_LIMIT, null); - try (ListIterator iter = new ListIterator<>( - MAX_SERVERS, infos)) { - return iter.list(); - } catch (Exception e) { - throw new HugeException("Failed to close server info iterator", e); - } - } - - protected Iterator serverInfos(String page) { - return this.serverInfos(ImmutableMap.of(), PAGE_SIZE, page); - } - protected Iterator serverInfos(long limit, String page) { return this.serverInfos(ImmutableMap.of(), limit, page); } diff --git a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/StandardTaskScheduler.java b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/StandardTaskScheduler.java index 5f60792af1..79dd98c0f4 100644 --- a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/StandardTaskScheduler.java +++ b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/StandardTaskScheduler.java @@ -18,7 +18,6 @@ package org.apache.hugegraph.task; import java.util.ArrayList; -import java.util.Collection; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -125,11 +124,9 @@ private TaskTransaction tx() { // NOTE: only the owner thread can access task tx if (this.taskTx == null) { /* - * NOTE: don't synchronized(this) due to scheduler thread hold - * this lock through scheduleTasks(), then query tasks and wait - * for db-worker thread after call(), the tx may not be initialized - * but can't catch this lock, then cause deadlock. - * We just use this.serverManager as a monitor here + * NOTE: don't synchronized(this) to avoid potential deadlock + * when multiple threads are accessing task transaction. + * We use this.serverManager as a monitor here for thread safety. 
*/ synchronized (this.serverManager) { if (this.taskTx == null) { @@ -146,9 +143,9 @@ private TaskTransaction tx() { @Override public void restoreTasks() { - Id selfServer = this.serverManager().selfNodeId(); List> taskList = new ArrayList<>(); // Restore 'RESTORING', 'RUNNING' and 'QUEUED' tasks in order. + // Single-node mode: restore all pending tasks without server filtering for (TaskStatus status : TaskStatus.PENDING_STATUSES) { String page = this.supportsPaging() ? PageInfo.PAGE_NONE : null; do { @@ -156,9 +153,7 @@ public void restoreTasks() { for (iter = this.findTask(status, PAGE_SIZE, page); iter.hasNext(); ) { HugeTask task = iter.next(); - if (selfServer.equals(task.server())) { - taskList.add(task); - } + taskList.add(task); } if (page != null) { page = PageInfo.pageInfo(iter); @@ -211,30 +206,9 @@ public Future schedule(HugeTask task) { return this.submitTask(task); } - // Check this is on master for normal task schedule - this.checkOnMasterNode("schedule"); - if (this.serverManager().onlySingleNode() && !task.computer()) { - /* - * Speed up for single node, submit the task immediately, - * this code can be removed without affecting code logic - */ - task.status(TaskStatus.QUEUED); - task.server(this.serverManager().selfNodeId()); - this.save(task); - return this.submitTask(task); - } else { - /* - * Just set the SCHEDULING status and save the task, - * it will be scheduled by periodic scheduler worker - */ - task.status(TaskStatus.SCHEDULING); - this.save(task); - - // Notify master server to schedule and execute immediately - TaskManager.instance().notifyNewTask(task); - - return task; - } + task.status(TaskStatus.QUEUED); + this.save(task); + return this.submitTask(task); } private Future submitTask(HugeTask task) { @@ -273,7 +247,6 @@ public void initTaskCallable(HugeTask task) { @Override public synchronized void cancel(HugeTask task) { E.checkArgumentNotNull(task, "Task can't be null"); - this.checkOnMasterNode("cancel"); if (task.completed() || 
task.cancelling()) { return; @@ -281,31 +254,15 @@ public synchronized void cancel(HugeTask task) { LOG.info("Cancel task '{}' in status {}", task.id(), task.status()); - if (task.server() == null) { - // The task not scheduled to workers, set canceled immediately - assert task.status().code() < TaskStatus.QUEUED.code(); - if (task.status(TaskStatus.CANCELLED)) { - this.save(task); - return; - } - } else if (task.status(TaskStatus.CANCELLING)) { - // The task scheduled to workers, let the worker node to cancel + HugeTask memTask = this.tasks.get(task.id()); + if (memTask != null) { + boolean cancelled = memTask.cancel(true); + LOG.info("Task '{}' cancel result: {}", task.id(), cancelled); + return; + } + + if (task.status(TaskStatus.CANCELLED)) { this.save(task); - assert task.server() != null : task; - assert this.serverManager().selfIsMaster(); - if (!task.server().equals(this.serverManager().selfNodeId())) { - /* - * Remove the task from memory if it's running on worker node, - * but keep the task in memory if it's running on master node. - * Cancel-scheduling will read the task from backend store, if - * removed this instance from memory, there will be two task - * instances with the same id, and can't cancel the real task that - * is running but removed from memory. - */ - this.remove(task); - } - // Notify master server to schedule and execute immediately - TaskManager.instance().notifyNewTask(task); return; } @@ -318,128 +275,11 @@ public ServerInfoManager serverManager() { return this.serverManager; } - protected synchronized void scheduleTasksOnMaster() { - // Master server schedule all scheduling tasks to suitable worker nodes - Collection serverInfos = this.serverManager().allServerInfos(); - String page = this.supportsPaging() ? 
PageInfo.PAGE_NONE : null; - do { - Iterator> tasks = this.tasks(TaskStatus.SCHEDULING, PAGE_SIZE, page); - while (tasks.hasNext()) { - HugeTask task = tasks.next(); - if (task.server() != null) { - // Skip if already scheduled - continue; - } - - if (!this.serverManager.selfIsMaster()) { - return; - } - - HugeServerInfo server = this.serverManager().pickWorkerNode(serverInfos, task); - if (server == null) { - LOG.info("The master can't find suitable servers to " + - "execute task '{}', wait for next schedule", task.id()); - continue; - } - - // Found suitable server, update task status - assert server.id() != null; - task.server(server.id()); - task.status(TaskStatus.SCHEDULED); - this.save(task); - - // Update server load in memory, it will be saved at the ending - server.increaseLoad(task.load()); - - LOG.info("Scheduled task '{}' to server '{}'", task.id(), server.id()); - } - if (page != null) { - page = PageInfo.pageInfo(tasks); - } - } while (page != null); - - // Save to store - this.serverManager().updateServerInfos(serverInfos); - } - - protected void executeTasksOnWorker(Id server) { - String page = this.supportsPaging() ? PageInfo.PAGE_NONE : null; - do { - Iterator> tasks = this.tasks(TaskStatus.SCHEDULED, PAGE_SIZE, page); - while (tasks.hasNext()) { - HugeTask task = tasks.next(); - this.initTaskCallable(task); - Id taskServer = task.server(); - if (taskServer == null) { - LOG.warn("Task '{}' may not be scheduled", task.id()); - continue; - } - HugeTask memTask = this.tasks.get(task.id()); - if (memTask != null) { - assert memTask.status().code() > task.status().code(); - continue; - } - if (taskServer.equals(server)) { - task.status(TaskStatus.QUEUED); - this.save(task); - this.submitTask(task); - } - } - if (page != null) { - page = PageInfo.pageInfo(tasks); - } - } while (page != null); - } - - protected void cancelTasksOnWorker(Id server) { - String page = this.supportsPaging() ? 
PageInfo.PAGE_NONE : null; - do { - Iterator> tasks = this.tasks(TaskStatus.CANCELLING, PAGE_SIZE, page); - while (tasks.hasNext()) { - HugeTask task = tasks.next(); - Id taskServer = task.server(); - if (taskServer == null) { - LOG.warn("Task '{}' may not be scheduled", task.id()); - continue; - } - if (!taskServer.equals(server)) { - continue; - } - /* - * Task may be loaded from backend store and not initialized. - * like: A task is completed but failed to save in the last - * step, resulting in the status of the task not being - * updated to storage, the task is not in memory, so it's not - * initialized when canceled. - */ - HugeTask memTask = this.tasks.get(task.id()); - if (memTask != null) { - task = memTask; - } else { - this.initTaskCallable(task); - } - boolean cancelled = task.cancel(true); - LOG.info("Server '{}' cancel task '{}' with cancelled={}", - server, task.id(), cancelled); - } - if (page != null) { - page = PageInfo.pageInfo(tasks); - } - } while (page != null); - } - @Override public void taskDone(HugeTask task) { this.remove(task); - - Id selfServerId = this.serverManager().selfNodeId(); - try { - this.serverManager().decreaseLoad(task.load()); - } catch (Throwable e) { - LOG.error("Failed to decrease load for task '{}' on server '{}'", - task.id(), selfServerId, e); - } - LOG.debug("Task '{}' done on server '{}'", task.id(), selfServerId); + // Single-node mode: no need to manage load + LOG.debug("Task '{}' done", task.id()); } protected void remove(HugeTask task) { @@ -738,10 +578,9 @@ public V call(Callable callable) { } } + @Deprecated private void checkOnMasterNode(String op) { - if (!this.serverManager().selfIsMaster()) { - throw new HugeException("Can't %s task on non-master server", op); - } + // Single-node mode: all operations are allowed, no role check needed } private boolean supportsPaging() { diff --git a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/TaskAndResultScheduler.java 
b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/TaskAndResultScheduler.java index 2ba3fd8a6d..6c99ef156d 100644 --- a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/TaskAndResultScheduler.java +++ b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/TaskAndResultScheduler.java @@ -46,6 +46,7 @@ * Base class of task & result scheduler */ public abstract class TaskAndResultScheduler implements TaskScheduler { + /** * Which graph the scheduler belongs to */ @@ -61,8 +62,8 @@ public abstract class TaskAndResultScheduler implements TaskScheduler { private final ServerInfoManager serverManager; public TaskAndResultScheduler( - HugeGraphParams graph, - ExecutorService serverInfoDbExecutor) { + HugeGraphParams graph, + ExecutorService serverInfoDbExecutor) { E.checkNotNull(graph, "graph"); this.graph = graph; @@ -90,7 +91,7 @@ public void save(HugeTask task) { // Save result outcome if (rawResult != null) { HugeTaskResult result = - new HugeTaskResult(HugeTaskResult.genId(task.id())); + new HugeTaskResult(HugeTaskResult.genId(task.id())); result.result(rawResult); this.call(() -> { @@ -164,7 +165,7 @@ protected Iterator> queryTask(Map conditions, } Iterator vertices = this.tx().queryTaskInfos(query); Iterator> tasks = - new MapperIterator<>(vertices, HugeTask::fromVertex); + new MapperIterator<>(vertices, HugeTask::fromVertex); // Convert iterator to list to avoid across thread tx accessed return QueryResults.toList(tasks); }); @@ -180,16 +181,16 @@ protected Iterator> queryTask(Map conditions, protected Iterator> queryTask(List ids) { ListIterator> ts = this.call( - () -> { - Object[] idArray = ids.toArray(new Id[ids.size()]); - Iterator vertices = this.tx() - .queryTaskInfos(idArray); - Iterator> tasks = - new MapperIterator<>(vertices, - HugeTask::fromVertex); - // Convert iterator to list to avoid across thread tx accessed - return QueryResults.toList(tasks); - }); + () -> { + Object[] idArray = 
ids.toArray(new Id[ids.size()]); + Iterator vertices = this.tx() + .queryTaskInfos(idArray); + Iterator> tasks = + new MapperIterator<>(vertices, + HugeTask::fromVertex); + // Convert iterator to list to avoid across thread tx accessed + return QueryResults.toList(tasks); + }); Iterator results = queryTaskResult(ids); @@ -201,7 +202,7 @@ protected Iterator> queryTask(List ids) { return new MapperIterator<>(ts, (task) -> { HugeTaskResult taskResult = - resultCaches.get(HugeTaskResult.genId(task.id())); + resultCaches.get(HugeTaskResult.genId(task.id())); if (taskResult != null) { task.result(taskResult); } @@ -219,6 +220,10 @@ protected HugeTask taskWithoutResult(Id id) { return HugeTask.fromVertex(vertex); }); + if (result == null) { + throw new NotFoundException("Can't find task with id '%s'", id); + } + return result; } @@ -227,7 +232,7 @@ protected Iterator> tasksWithoutResult(List ids) { Object[] idArray = ids.toArray(new Id[ids.size()]); Iterator vertices = this.tx().queryTaskInfos(idArray); Iterator> tasks = - new MapperIterator<>(vertices, HugeTask::fromVertex); + new MapperIterator<>(vertices, HugeTask::fromVertex); // Convert iterator to list to avoid across thread tx accessed return QueryResults.toList(tasks); }); @@ -250,7 +255,7 @@ protected Iterator> queryTaskWithoutResult(String key, } protected Iterator> queryTaskWithoutResult(Map conditions, long limit, String page) { + Object> conditions, long limit, String page) { return this.call(() -> { ConditionQuery query = new ConditionQuery(HugeType.TASK); if (page != null) { @@ -268,7 +273,7 @@ protected Iterator> queryTaskWithoutResult(Map vertices = this.tx().queryTaskInfos(query); Iterator> tasks = - new MapperIterator<>(vertices, HugeTask::fromVertex); + new MapperIterator<>(vertices, HugeTask::fromVertex); // Convert iterator to list to avoid across thread tx accessed return QueryResults.toList(tasks); }); @@ -277,7 +282,7 @@ protected Iterator> queryTaskWithoutResult(Map { Iterator vertices = - 
this.tx().queryTaskInfos(HugeTaskResult.genId(taskid)); + this.tx().queryTaskInfos(HugeTaskResult.genId(taskid)); Vertex vertex = QueryResults.one(vertices); if (vertex == null) { return null; @@ -292,12 +297,12 @@ protected HugeTaskResult queryTaskResult(Id taskid) { protected Iterator queryTaskResult(List taskIds) { return this.call(() -> { Object[] idArray = - taskIds.stream().map(HugeTaskResult::genId).toArray(); + taskIds.stream().map(HugeTaskResult::genId).toArray(); Iterator vertices = this.tx() .queryTaskInfos(idArray); Iterator tasks = - new MapperIterator<>(vertices, - HugeTaskResult::fromVertex); + new MapperIterator<>(vertices, + HugeTaskResult::fromVertex); // Convert iterator to list to avoid across thread tx accessed return QueryResults.toList(tasks); }); diff --git a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/TaskManager.java b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/TaskManager.java index 277822a386..9ce9762743 100644 --- a/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/TaskManager.java +++ b/hugegraph-server/hugegraph-core/src/main/java/org/apache/hugegraph/task/TaskManager.java @@ -18,7 +18,6 @@ package org.apache.hugegraph.task; import java.util.Map; -import java.util.Queue; import java.util.concurrent.Callable; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ExecutorService; @@ -33,7 +32,6 @@ import org.apache.hugegraph.util.Consumers; import org.apache.hugegraph.util.E; import org.apache.hugegraph.util.ExecutorUtil; -import org.apache.hugegraph.util.LockUtil; import org.apache.hugegraph.util.Log; import org.slf4j.Logger; @@ -76,8 +74,6 @@ public final class TaskManager { private final ExecutorService ephemeralTaskExecutor; private final PausableScheduledThreadPool distributedSchedulerExecutor; - private boolean enableRoleElected = false; - public static TaskManager instance() { return MANAGER; } @@ -102,11 +98,6 @@ private 
TaskManager(int pool) { // For a schedule task to run, just one thread is ok this.schedulerExecutor = ExecutorUtil.newPausableScheduledThreadPool( 1, TASK_SCHEDULER); - // Start after 10x period time waiting for HugeGraphServer startup - this.schedulerExecutor.scheduleWithFixedDelay(this::scheduleOrExecuteJob, - 10 * SCHEDULE_PERIOD, - SCHEDULE_PERIOD, - TimeUnit.MILLISECONDS); } public void addScheduler(HugeGraphParams graph) { @@ -230,14 +221,6 @@ private void closeDistributedSchedulerTx(HugeGraphParams graph) { } } - public void pauseScheduledThreadPool() { - this.schedulerExecutor.pauseSchedule(); - } - - public void resumeScheduledThreadPool() { - this.schedulerExecutor.resumeSchedule(); - } - public TaskScheduler getScheduler(HugeGraphParams graph) { return this.schedulers.get(graph); } @@ -349,10 +332,6 @@ public int pendingTasks() { return size; } - public void enableRoleElection() { - this.enableRoleElected = true; - } - public void onAsRoleMaster() { try { for (TaskScheduler entry : this.schedulers.values()) { @@ -385,91 +364,6 @@ public void onAsRoleWorker() { } } - void notifyNewTask(HugeTask task) { - Queue queue = this.schedulerExecutor - .getQueue(); - if (queue.size() <= 1) { - /* - * Notify to schedule tasks initiatively when have new task - * It's OK to not notify again if there are more than one task in - * queue(like two, one is timer task, one is immediate task), - * we don't want too many immediate tasks to be inserted into queue, - * one notify will cause all the tasks to be processed. 
- */ - this.schedulerExecutor.submit(this::scheduleOrExecuteJob); - } - } - - private void scheduleOrExecuteJob() { - // Called by scheduler timer - try { - for (TaskScheduler entry : this.schedulers.values()) { - // Maybe other threads close&remove scheduler at the same time - synchronized (entry) { - this.scheduleOrExecuteJobForGraph(entry); - } - } - } catch (Throwable e) { - LOG.error("Exception occurred when schedule job", e); - } - } - - private void scheduleOrExecuteJobForGraph(TaskScheduler scheduler) { - E.checkNotNull(scheduler, "scheduler"); - - if (scheduler instanceof StandardTaskScheduler) { - StandardTaskScheduler standardTaskScheduler = (StandardTaskScheduler) (scheduler); - ServerInfoManager serverManager = scheduler.serverManager(); - String spaceGraphName = scheduler.spaceGraphName(); - - LockUtil.lock(spaceGraphName, LockUtil.GRAPH_LOCK); - try { - /* - * Skip if: - * graph is closed (iterate schedulers before graph is closing) - * or - * graph is not initialized(maybe truncated or cleared). - * - * If graph is closing by other thread, current thread get - * serverManager and try lock graph, at the same time other - * thread deleted the lock-group, current thread would get - * exception 'LockGroup xx does not exists'. - * If graph is closed, don't call serverManager.initialized() - * due to it will reopen graph tx. - */ - if (!serverManager.graphIsReady()) { - return; - } - - // Update server heartbeat - serverManager.heartbeat(); - - /* - * Master will schedule tasks to suitable servers. - * Note a Worker may become to a Master, so elected-Master also needs to - * execute tasks assigned by previous Master when enableRoleElected=true. - * However, when enableRoleElected=false, a Master is only set by the - * config assignment, assigned-Master always stays the same state. 
- */ - if (serverManager.selfIsMaster()) { - standardTaskScheduler.scheduleTasksOnMaster(); - if (!this.enableRoleElected && !serverManager.onlySingleNode()) { - // assigned-Master + non-single-node don't need to execute tasks - return; - } - } - - // Execute queued tasks scheduled to current server - standardTaskScheduler.executeTasksOnWorker(serverManager.selfNodeId()); - - // Cancel tasks scheduled to current server - standardTaskScheduler.cancelTasksOnWorker(serverManager.selfNodeId()); - } finally { - LockUtil.unlock(spaceGraphName, LockUtil.GRAPH_LOCK); - } - } - } - private static final ThreadLocal CONTEXTS = new ThreadLocal<>(); public static void setContext(String context) { diff --git a/hugegraph-server/hugegraph-dist/docker/README.md b/hugegraph-server/hugegraph-dist/docker/README.md index 454d4ca24d..20c8565b80 100644 --- a/hugegraph-server/hugegraph-dist/docker/README.md +++ b/hugegraph-server/hugegraph-dist/docker/README.md @@ -40,7 +40,7 @@ If you want to customize the preloaded data, please mount the groovy scripts (no 2. Using docker compose - We can also use `docker-compose up -d` to quickly start. The `docker-compose.yaml` is below. [example.groovy](https://github.com/apache/hugegraph/blob/master/hugegraph-server/hugegraph-dist/src/assembly/static/scripts/example.groovy) is a pre-defined script. If needed, we can mount a new `example.groovy` to preload different data: + We can also use `docker-compose up -d` to quickly start. The `docker-compose.yaml` is below. [example.groovy](https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-server/hugegraph-dist/src/assembly/static/scripts/example.groovy) is a pre-defined script. 
If needed, we can mount a new `example.groovy` to preload different data: ```yaml version: '3' diff --git a/hugegraph-server/hugegraph-dist/docker/docker-entrypoint.sh b/hugegraph-server/hugegraph-dist/docker/docker-entrypoint.sh old mode 100755 new mode 100644 index b40886e040..60cd4bc163 --- a/hugegraph-server/hugegraph-dist/docker/docker-entrypoint.sh +++ b/hugegraph-server/hugegraph-dist/docker/docker-entrypoint.sh @@ -15,78 +15,32 @@ # See the License for the specific language governing permissions and # limitations under the License. # -set -euo pipefail -DOCKER_FOLDER="./docker" -INIT_FLAG_FILE="init_complete" -GRAPH_CONF="./conf/graphs/hugegraph.properties" - -mkdir -p "${DOCKER_FOLDER}" - -log() { echo "[hugegraph-server-entrypoint] $*"; } - -set_prop() { - local key="$1" val="$2" file="$3" - local esc_key esc_val - - esc_key=$(printf '%s' "$key" | sed -e 's/[][(){}.^$*+?|\\/]/\\&/g') - esc_val=$(printf '%s' "$val" | sed -e 's/[&|\\]/\\&/g') +# create a folder to save the docker-related file +DOCKER_FOLDER='./docker' +mkdir -p $DOCKER_FOLDER - if grep -qE "^[[:space:]]*${esc_key}[[:space:]]*=" "${file}"; then - sed -ri "s|^([[:space:]]*${esc_key}[[:space:]]*=).*|\\1${esc_val}|" "${file}" - else - printf '%s=%s\n' "$key" "$val" >> "${file}" - fi -} - -migrate_env() { - local old_name="$1" new_name="$2" - - if [[ -n "${!old_name:-}" && -z "${!new_name:-}" ]]; then - log "WARN: deprecated env '${old_name}' detected; mapping to '${new_name}'" - export "${new_name}=${!old_name}" - fi -} - -migrate_env "BACKEND" "HG_SERVER_BACKEND" -migrate_env "PD_PEERS" "HG_SERVER_PD_PEERS" - -# ── Map env → properties file ───────────────────────────────────────── -[[ -n "${HG_SERVER_BACKEND:-}" ]] && set_prop "backend" "${HG_SERVER_BACKEND}" "${GRAPH_CONF}" -[[ -n "${HG_SERVER_PD_PEERS:-}" ]] && set_prop "pd.peers" "${HG_SERVER_PD_PEERS}" "${GRAPH_CONF}" - -# ── Build wait-storage env ───────────────────────────────────────────── -WAIT_ENV=() -[[ -n "${HG_SERVER_BACKEND:-}" 
]] && WAIT_ENV+=("hugegraph.backend=${HG_SERVER_BACKEND}") -[[ -n "${HG_SERVER_PD_PEERS:-}" ]] && WAIT_ENV+=("hugegraph.pd.peers=${HG_SERVER_PD_PEERS}") - -# ── Init store (once) ───────────────────────────────────────────────── -if [[ ! -f "${DOCKER_FOLDER}/${INIT_FLAG_FILE}" ]]; then - if (( ${#WAIT_ENV[@]} > 0 )); then - env "${WAIT_ENV[@]}" ./bin/wait-storage.sh - else - ./bin/wait-storage.sh - fi +INIT_FLAG_FILE="init_complete" - if [[ -z "${PASSWORD:-}" ]]; then - log "init hugegraph with non-auth mode" +if [ ! -f "${DOCKER_FOLDER}/${INIT_FLAG_FILE}" ]; then + # wait for storage backend + ./bin/wait-storage.sh + if [ -z "$PASSWORD" ]; then + echo "init hugegraph with non-auth mode" ./bin/init-store.sh else - log "init hugegraph with auth mode" + echo "init hugegraph with auth mode" ./bin/enable-auth.sh - echo "${PASSWORD}" | ./bin/init-store.sh + echo "$PASSWORD" | ./bin/init-store.sh fi - touch "${DOCKER_FOLDER}/${INIT_FLAG_FILE}" + # create a flag file to avoid re-init when restarting + touch ${DOCKER_FOLDER}/${INIT_FLAG_FILE} else - log "HugeGraph initialization already done. Skipping re-init..." + echo "HugeGraph initialization already done. Skipping re-init..."
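The entrypoint guards `init-store.sh` behind a flag file so that container restarts skip re-initialization. A minimal sketch of that init-once pattern (the paths mirror the entrypoint, but `run_init` is an illustrative stand-in, not the real init scripts):

```shell
#!/usr/bin/env bash
# Init-once guard: run initialization only when the flag file is absent.
DOCKER_FOLDER='./docker'
INIT_FLAG_FILE='init_complete'
mkdir -p "${DOCKER_FOLDER}"

run_init() {
    # stand-in for: ./bin/wait-storage.sh && ./bin/init-store.sh
    echo "initializing store"
}

if [ ! -f "${DOCKER_FOLDER}/${INIT_FLAG_FILE}" ]; then
    run_init
    # the flag file makes later container restarts skip re-init
    touch "${DOCKER_FOLDER}/${INIT_FLAG_FILE}"
else
    echo "initialization already done, skipping"
fi
```

Running the sketch twice prints the init message only on the first run, which is also why the flag file must live on a path that persists across container restarts (here, the mounted `./docker` folder).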
fi -STORE_REST="${STORE_REST:-store:8520}" -export STORE_REST - -./bin/start-hugegraph.sh -j "${JAVA_OPTS:-}" -t 120 - -# Post-startup cluster stabilization check -./bin/wait-partition.sh || log "WARN: partitions not assigned yet" +# start hugegraph-server +# remove "-g zgc" now, which is only available on ARM-Mac with java > 13 +./bin/start-hugegraph.sh -j "$JAVA_OPTS" tail -f /dev/null diff --git a/hugegraph-server/hugegraph-dist/pom.xml b/hugegraph-server/hugegraph-dist/pom.xml index 1fe20b6d9b..324d253dc5 100644 --- a/hugegraph-server/hugegraph-dist/pom.xml +++ b/hugegraph-server/hugegraph-dist/pom.xml @@ -356,7 +356,7 @@ - + !skip-tar-package diff --git a/hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-storage.sh b/hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-storage.sh old mode 100755 new mode 100644 index d4e9e278f0..556e022478 --- a/hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-storage.sh +++ b/hugegraph-server/hugegraph-dist/src/assembly/static/bin/wait-storage.sh @@ -29,15 +29,11 @@ function abs_path() { BIN=$(abs_path) TOP="$(cd "$BIN"/../ && pwd)" GRAPH_CONF="$TOP/conf/graphs/hugegraph.properties" -WAIT_STORAGE_TIMEOUT_S=300 +WAIT_STORAGE_TIMEOUT_S=120 +DETECT_STORAGE="$TOP/scripts/detect-storage.groovy" . "$BIN"/util.sh -log() { - echo "[wait-storage] $1" -} - -PD_AUTH_ARGS="-u ${PD_AUTH_USER:-store}:${PD_AUTH_PASSWORD:-admin}" function key_exists { local key=$1 @@ -74,58 +70,7 @@ done < <(env | sort -r | awk -F= '{ st = index($0, "="); print $1 " " substr($0, # wait for storage if env | grep '^hugegraph\.' 
> /dev/null; then if [ -n "${WAIT_STORAGE_TIMEOUT_S:-}" ]; then - - PD_PEERS="${hugegraph_pd_peers:-}" - if [ -z "$PD_PEERS" ]; then - PD_PEERS=$(grep -E "^\s*pd\.peers\s*=" "$GRAPH_CONF" | sed 's/.*=\s*//' | tr -d ' ') - fi - - if [ -n "$PD_PEERS" ]; then - : "${HG_SERVER_PD_REST_ENDPOINT:=}" - - if [ -n "${HG_SERVER_PD_REST_ENDPOINT}" ]; then - PD_REST_LIST="${HG_SERVER_PD_REST_ENDPOINT}" - else - PD_REST_LIST=$(echo "$PD_PEERS" | sed 's/:8686/:8620/g') - fi - - export PD_REST_LIST - log "PD REST peers = $PD_REST_LIST" - log "Timeout = ${WAIT_STORAGE_TIMEOUT_S}s" - - timeout "${WAIT_STORAGE_TIMEOUT_S}s" bash -c " - - log() { echo '[wait-storage] '\"\$1\"; } - - check_any_pd() { - for peer in \$(echo \"\$PD_REST_LIST\" | tr ',' ' '); do - if curl ${PD_AUTH_ARGS} -f -s http://\${peer}/v1/health >/dev/null 2>&1; then - echo \"\$peer\" - return 0 - fi - done - return 1 - } - - until PD_REST=\$(check_any_pd); do - log 'No PD peer ready yet, retrying in 5s' - sleep 5 - done - log \"PD health check PASSED via \$PD_REST\" - - until curl ${PD_AUTH_ARGS} -f -s \ - http://\${PD_REST}/v1/stores 2>/dev/null | \ - grep -qi '\"state\"[[:space:]]*:[[:space:]]*\"Up\"'; do - log 'No Up store yet, retrying in 5s' - sleep 5 - done - - log 'Store registration check PASSED' - log 'Storage backend is VIABLE' - " || { echo "[wait-storage] ERROR: Timeout waiting for storage backend"; exit 1; } - - else - log "No pd.peers configured, skipping storage wait" - fi + timeout "${WAIT_STORAGE_TIMEOUT_S}s" bash -c \ + "until bin/gremlin-console.sh -- -e $DETECT_STORAGE > /dev/null 2>&1; do echo \"HugeGraph server is waiting for storage backend...\"; sleep 5; done" fi fi diff --git a/hugegraph-server/hugegraph-dist/src/assembly/static/conf/graphs/hstore.properties.template b/hugegraph-server/hugegraph-dist/src/assembly/static/conf/graphs/hstore.properties.template index d3834baf5c..fd2782a87d 100644 --- a/hugegraph-server/hugegraph-dist/src/assembly/static/conf/graphs/hstore.properties.template
+++ b/hugegraph-server/hugegraph-dist/src/assembly/static/conf/graphs/hstore.properties.template @@ -31,7 +31,6 @@ store=hugegraph pd.peers=127.0.0.1:8686 # task config -task.scheduler_type=local task.schedule_period=10 task.retry=0 task.wait_timeout=10 diff --git a/hugegraph-server/hugegraph-dist/src/assembly/static/conf/graphs/hugegraph.properties b/hugegraph-server/hugegraph-dist/src/assembly/static/conf/graphs/hugegraph.properties index b77cacb2de..3727919bbb 100644 --- a/hugegraph-server/hugegraph-dist/src/assembly/static/conf/graphs/hugegraph.properties +++ b/hugegraph-server/hugegraph-dist/src/assembly/static/conf/graphs/hugegraph.properties @@ -30,7 +30,6 @@ store=hugegraph #pd.peers=127.0.0.1:8686 # task config -task.scheduler_type=local task.schedule_period=10 task.retry=0 task.wait_timeout=10 diff --git a/hugegraph-server/hugegraph-dist/src/assembly/static/conf/rest-server.properties b/hugegraph-server/hugegraph-dist/src/assembly/static/conf/rest-server.properties index ad3e2700f8..eba2ed1f5d 100644 --- a/hugegraph-server/hugegraph-dist/src/assembly/static/conf/rest-server.properties +++ b/hugegraph-server/hugegraph-dist/src/assembly/static/conf/rest-server.properties @@ -23,9 +23,6 @@ arthas.disabled_commands=jad #auth.admin_pa=pa #auth.graph_store=hugegraph -# lightweight load balancing (TODO: legacy mode, remove soon) -server.id=server-1 -server.role=master # use pd # usePD=true diff --git a/hugegraph-server/hugegraph-dist/src/assembly/travis/install-cassandra.sh b/hugegraph-server/hugegraph-dist/src/assembly/travis/install-cassandra.sh index 86b22f2e0d..629a4779ea 100755 --- a/hugegraph-server/hugegraph-dist/src/assembly/travis/install-cassandra.sh +++ b/hugegraph-server/hugegraph-dist/src/assembly/travis/install-cassandra.sh @@ -17,7 +17,7 @@ # set -ev -CASS_DOWNLOAD_ADDRESS="https://archive.apache.org/dist/cassandra" +CASS_DOWNLOAD_ADDRESS="http://archive.apache.org/dist/cassandra" CASS_VERSION="4.0.10" 
CASS_PACKAGE="apache-cassandra-${CASS_VERSION}" CASS_TAR="${CASS_PACKAGE}-bin.tar.gz" diff --git a/hugegraph-server/hugegraph-dist/src/assembly/travis/install-hbase.sh b/hugegraph-server/hugegraph-dist/src/assembly/travis/install-hbase.sh index 8317ba86d6..ae40b284ea 100755 --- a/hugegraph-server/hugegraph-dist/src/assembly/travis/install-hbase.sh +++ b/hugegraph-server/hugegraph-dist/src/assembly/travis/install-hbase.sh @@ -18,7 +18,7 @@ set -ev TRAVIS_DIR=$(dirname $0) -HBASE_DOWNLOAD_ADDRESS="https://archive.apache.org/dist/hbase" +HBASE_DOWNLOAD_ADDRESS="http://archive.apache.org/dist/hbase" HBASE_VERSION="2.0.2" HBASE_PACKAGE="hbase-${HBASE_VERSION}" HBASE_TAR="${HBASE_PACKAGE}-bin.tar.gz" diff --git a/hugegraph-server/hugegraph-dist/src/assembly/travis/run-api-test-for-raft.sh b/hugegraph-server/hugegraph-dist/src/assembly/travis/run-api-test-for-raft.sh index a48894728e..2b998d57aa 100755 --- a/hugegraph-server/hugegraph-dist/src/assembly/travis/run-api-test-for-raft.sh +++ b/hugegraph-server/hugegraph-dist/src/assembly/travis/run-api-test-for-raft.sh @@ -23,7 +23,7 @@ REPORT_FILE=$REPORT_DIR/jacoco-api-test.xml TRAVIS_DIR=$(dirname $0) VERSION=$(mvn help:evaluate -Dexpression=project.version -q -DforceStdout) -SERVER_DIR=hugegraph-server/apache-hugegraph-server-$VERSION +SERVER_DIR=hugegraph-server/apache-hugegraph-server-incubating-$VERSION RAFT1_DIR=hugegraph-raft1 RAFT2_DIR=hugegraph-raft2 RAFT3_DIR=hugegraph-raft3 diff --git a/hugegraph-server/hugegraph-dist/src/assembly/travis/run-api-test.sh b/hugegraph-server/hugegraph-dist/src/assembly/travis/run-api-test.sh index 14ea659527..3bf0d2d9ea 100755 --- a/hugegraph-server/hugegraph-dist/src/assembly/travis/run-api-test.sh +++ b/hugegraph-server/hugegraph-dist/src/assembly/travis/run-api-test.sh @@ -23,7 +23,7 @@ REPORT_FILE=$REPORT_DIR/jacoco-api-test-for-raft.xml TRAVIS_DIR=$(dirname $0) VERSION=$(mvn help:evaluate -Dexpression=project.version -q -DforceStdout) 
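The CI scripts in this patch rename the dist directories to carry the `incubating` suffix, deriving the path from the Maven project version obtained via `mvn help:evaluate`. A minimal sketch of that naming step (the version value is a hypothetical stand-in for the `mvn` output; the PD/store scripts use a `VersionInBash` variable the same way):

```shell
#!/usr/bin/env bash
# Stand-in for: VERSION=$(mvn help:evaluate -Dexpression=project.version -q -DforceStdout)
VERSION="1.5.0"

# Dist directory naming as used by the travis scripts after this change
SERVER_DIR="hugegraph-server/apache-hugegraph-server-incubating-${VERSION}"
PD_DIR="hugegraph-pd/apache-hugegraph-pd-incubating-${VERSION}"
STORE_DIR="hugegraph-store/apache-hugegraph-store-incubating-${VERSION}"

echo "${SERVER_DIR}"
```

Keeping the suffix in one derived variable means a version bump in `pom.xml` propagates to every script path without further edits.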
-SERVER_DIR=hugegraph-server/apache-hugegraph-server-$VERSION/ +SERVER_DIR=hugegraph-server/apache-hugegraph-server-incubating-$VERSION/ CONF=$SERVER_DIR/conf/graphs/hugegraph.properties REST_SERVER_CONF=$SERVER_DIR/conf/rest-server.properties GREMLIN_SERVER_CONF=$SERVER_DIR/conf/gremlin-server.yaml @@ -34,8 +34,8 @@ mvn package -Dmaven.test.skip=true -ntp # add mysql dependency wget -P $SERVER_DIR/lib/ https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar -if [[ ! -e "$SERVER_DIR/lib/ikanalyzer-2012_u6.jar" ]]; then - wget -P $SERVER_DIR/lib/ https://raw.githubusercontent.com/apache/hugegraph-doc/ik_binary/dist/server/ikanalyzer-2012_u6.jar +if [[ ! -e "$SERVER_DIR/lib/ikanalyzer-2012_u6.jar" ]]; then + wget -P $SERVER_DIR/lib/ https://raw.githubusercontent.com/apache/incubator-hugegraph-doc/ik_binary/dist/server/ikanalyzer-2012_u6.jar fi # config rest-server diff --git a/hugegraph-server/hugegraph-dist/src/assembly/travis/start-pd.sh b/hugegraph-server/hugegraph-dist/src/assembly/travis/start-pd.sh index 35e82ade40..bab4adcc8d 100755 --- a/hugegraph-server/hugegraph-dist/src/assembly/travis/start-pd.sh +++ b/hugegraph-server/hugegraph-dist/src/assembly/travis/start-pd.sh @@ -29,7 +29,7 @@ else exit 1 fi -PD_DIR=$HOME_DIR/hugegraph-pd/apache-hugegraph-pd-$VersionInBash +PD_DIR=$HOME_DIR/hugegraph-pd/apache-hugegraph-pd-incubating-$VersionInBash pushd $PD_DIR . bin/start-hugegraph-pd.sh diff --git a/hugegraph-server/hugegraph-dist/src/assembly/travis/start-store.sh b/hugegraph-server/hugegraph-dist/src/assembly/travis/start-store.sh index 3e876ce9a0..8882df3a8e 100755 --- a/hugegraph-server/hugegraph-dist/src/assembly/travis/start-store.sh +++ b/hugegraph-server/hugegraph-dist/src/assembly/travis/start-store.sh @@ -29,7 +29,7 @@ else exit 1 fi -STORE_DIR=$HOME_DIR/hugegraph-store/apache-hugegraph-store-$VersionInBash +STORE_DIR=$HOME_DIR/hugegraph-store/apache-hugegraph-store-incubating-$VersionInBash pushd $STORE_DIR .
bin/start-hugegraph-store.sh diff --git a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/ApiTestSuite.java b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/ApiTestSuite.java index 1fe8fc45fa..07eb608adf 100644 --- a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/ApiTestSuite.java +++ b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/ApiTestSuite.java @@ -42,8 +42,6 @@ CypherApiTest.class, ArthasApiTest.class, GraphSpaceApiTest.class, - GraphSpaceApiStandaloneTest.class, - ManagerApiStandaloneTest.class, }) public class ApiTestSuite { diff --git a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/BaseApiTest.java b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/BaseApiTest.java index a6f84de33e..f88c134abd 100644 --- a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/BaseApiTest.java +++ b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/BaseApiTest.java @@ -39,7 +39,6 @@ import org.junit.After; import org.junit.AfterClass; import org.junit.Assert; -import org.junit.Assume; import org.junit.BeforeClass; import com.fasterxml.jackson.databind.JavaType; @@ -62,9 +61,9 @@ public class BaseApiTest { protected static final String BASE_URL = "http://127.0.0.1:8080"; private static final String GRAPH = "hugegraph"; private static final String GRAPHSPACE = "DEFAULT"; + private static final String USERNAME = "admin"; protected static final String URL_PREFIX = "graphspaces/" + GRAPHSPACE + "/graphs/" + GRAPH; protected static final String TRAVERSERS_API = URL_PREFIX + "/traversers"; - private static final String USERNAME = "admin"; private static final String PASSWORD = "pa"; private static final int NO_LIMIT = -1; private static final String SCHEMA_PKS = "/schema/propertykeys"; @@ -74,8 +73,6 @@ public class BaseApiTest { private static final String GRAPH_VERTEX = "/graph/vertices"; private 
static final String GRAPH_EDGE = "/graph/edges"; private static final String BATCH = "/batch"; - static final String STANDALONE_ERROR = - "GraphSpace management is not supported in standalone mode"; private static final String ROCKSDB_CONFIG_TEMPLATE = "{ \"gremlin.graph\": \"org.apache.hugegraph.HugeFactory\"," + @@ -85,9 +82,11 @@ public class BaseApiTest { "\"rocksdb.wal_path\": \"rocksdbtest-data-%s\"," + "\"search.text_analyzer\": \"jieba\"," + "\"search.text_analyzer_mode\": \"INDEX\" }"; - private static final ObjectMapper MAPPER = new ObjectMapper(); + protected static RestClient client; + private static final ObjectMapper MAPPER = new ObjectMapper(); + @BeforeClass public static void init() { client = newClient(); @@ -100,10 +99,19 @@ public static void clear() throws Exception { client = null; } + @After + public void teardown() throws Exception { + BaseApiTest.clearData(); + } + public static String baseUrl() { return BASE_URL; } + public RestClient client() { + return client; + } + public static RestClient newClient() { return new RestClient(BASE_URL); } @@ -185,8 +193,7 @@ protected static void waitTaskStatus(int task, Set expectedStatus) { Assert.fail(String.format("Failed to wait for task %s " + "due to timeout", task)); } - } - while (!expectedStatus.contains(status)); + } while (!expectedStatus.contains(status)); } protected static void initVertexLabel() { @@ -741,30 +748,6 @@ public static RestClient analystClient(String graphSpace, String username) { return analystClient; } - /** - * Skips the current test when the server backend is not known to be in - * standalone mode. Treats both {@code "hstore"} and {@code null} - * (i.e. the backend property is not provided/unknown) as PD/distributed - * mode and skips the test for safety. - * Call this from a {@code @Before} method in standalone-only test classes. 
- */ - public static void assumeStandaloneMode() { - String backend = System.getProperty("backend"); - boolean isPdMode = backend == null || "hstore".equals(backend); - Assume.assumeFalse( - "Skip when backend is hstore (PD/distributed) or not specified", - isPdMode); - } - - @After - public void teardown() throws Exception { - BaseApiTest.clearData(); - } - - public RestClient client() { - return client; - } - public static class RestClient { private final Client client; diff --git a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/GraphSpaceApiStandaloneTest.java b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/GraphSpaceApiStandaloneTest.java deleted file mode 100644 index 21e3975a21..0000000000 --- a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/GraphSpaceApiStandaloneTest.java +++ /dev/null @@ -1,91 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hugegraph.api; - -import com.google.common.collect.ImmutableMap; - -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; - -import jakarta.ws.rs.core.Response; - -/** - * Tests that GraphSpaceAPI returns a friendly HTTP 400 error in standalone - * mode (i.e. when the server is started without PD / hstore backend). - *
- * This class intentionally does NOT have a class-level Assume guard so that - * the tests are actually executed in non-hstore CI runs. - */ -public class GraphSpaceApiStandaloneTest extends BaseApiTest { - - private static final String PATH = "graphspaces"; - - @Before - public void skipForPdMode() { - assumeStandaloneMode(); - } - - @Test - public void testProfileReturnsFriendlyError() { - Response r = this.client().get(PATH + "/profile"); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testListReturnsFriendlyError() { - Response r = this.client().get(PATH); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testGetReturnsFriendlyError() { - Response r = this.client().get(PATH, "DEFAULT"); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testCreateReturnsFriendlyError() { - String body = "{\"name\":\"test_standalone\",\"nickname\":\"test\"," - + "\"description\":\"test\",\"cpu_limit\":10," - + "\"memory_limit\":10,\"storage_limit\":10," - + "\"max_graph_number\":10,\"max_role_number\":10," - + "\"auth\":false,\"configs\":{}}"; - Response r = this.client().post(PATH, body); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testManageReturnsFriendlyError() { - String body = "{\"action\":\"update\",\"update\":{\"name\":\"DEFAULT\"}}"; - Response r = this.client().put(PATH, "DEFAULT", body, ImmutableMap.of()); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testDeleteReturnsFriendlyError() { - Response r = this.client().delete(PATH, "nonexistent"); - String content = assertResponseStatus(400, r); - 
Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } -} diff --git a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/GraphSpaceApiTest.java b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/GraphSpaceApiTest.java index 1c3eb77995..01782e7e01 100644 --- a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/GraphSpaceApiTest.java +++ b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/GraphSpaceApiTest.java @@ -42,7 +42,7 @@ public void removeSpaces() { Response r = this.client().get(PATH); String result = r.readEntity(String.class); Map resultMap = JsonUtil.fromJson(result, Map.class); - List spaces = (List)resultMap.get("graphSpaces"); + List spaces = (List) resultMap.get("graphSpaces"); for (String space : spaces) { if (!"DEFAULT".equals(space)) { this.client().delete(PATH, space); diff --git a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/ManagerApiStandaloneTest.java b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/ManagerApiStandaloneTest.java deleted file mode 100644 index 4c0e17815b..0000000000 --- a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/ManagerApiStandaloneTest.java +++ /dev/null @@ -1,128 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hugegraph.api; - -import java.util.Map; - -import org.apache.hugegraph.auth.HugePermission; -import org.junit.Assert; -import org.junit.Before; -import org.junit.Test; - -import jakarta.ws.rs.core.Response; - -/** - * Tests that ManagerAPI returns a friendly HTTP 400 error in standalone mode - * (i.e. when the server is started without PD / hstore backend). - *
- * This class intentionally does NOT have a class-level Assume guard so that - * the tests are actually executed in non-hstore CI runs. - */ -public class ManagerApiStandaloneTest extends BaseApiTest { - - private static String managerPath(String graphSpace) { - return String.format("graphspaces/%s/auth/managers", graphSpace); - } - - @Before - public void skipForPdMode() { - assumeStandaloneMode(); - } - - @Test - public void testCreateManagerReturnsFriendlyError() { - String body = "{\"user\":\"admin\",\"type\":\"ADMIN\"}"; - Response r = this.client().post(managerPath("DEFAULT"), body); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testDeleteManagerReturnsFriendlyError() { - Response r = this.client().delete(managerPath("DEFAULT"), - Map.of("user", "admin", - "type", HugePermission.ADMIN)); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testListManagerReturnsFriendlyError() { - Response r = this.client().get(managerPath("DEFAULT"), - Map.of("type", (Object)HugePermission.ADMIN)); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testCheckRoleReturnsFriendlyError() { - Response r = this.client().get(managerPath("DEFAULT") + "/check", - Map.of("type", (Object)HugePermission.ADMIN)); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testGetRolesInGsReturnsFriendlyError() { - Response r = this.client().get(managerPath("DEFAULT") + "/role", - Map.of("user", (Object)"admin")); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testCreateSpaceManagerReturnsFriendlyError() { - String body = "{\"user\":\"admin\",\"type\":\"SPACE\"}"; - Response r = 
this.client().post(managerPath("nonexistent"), body); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testDeleteSpaceManagerReturnsFriendlyError() { - Response r = this.client().delete(managerPath("nonexistent"), - Map.of("user", "admin", - "type", HugePermission.SPACE)); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testListSpaceManagerReturnsFriendlyError() { - Response r = this.client().get(managerPath("nonexistent"), - Map.of("type", (Object) HugePermission.SPACE)); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testCheckRoleSpaceReturnsFriendlyError() { - Response r = this.client().get(managerPath("nonexistent") + "/check", - Map.of("type", (Object) HugePermission.SPACE)); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } - - @Test - public void testGetRolesInGsNonExistentReturnsFriendlyError() { - Response r = this.client().get(managerPath("nonexistent") + "/role", - Map.of("user", (Object) "admin")); - String content = assertResponseStatus(400, r); - Assert.assertTrue(content.contains(STANDALONE_ERROR)); - } -} diff --git a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/ManagerApiTest.java b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/ManagerApiTest.java index 095361f43e..afae0c94a9 100644 --- a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/ManagerApiTest.java +++ b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/api/ManagerApiTest.java @@ -68,13 +68,13 @@ private void deleteSpaceMembers() { Response r1 = this.client().get("/graphspaces"); String result = r1.readEntity(String.class); Map resultMap = JsonUtil.fromJson(result, Map.class); - List spaces 
= (List)resultMap.get("graphSpaces"); + List spaces = (List) resultMap.get("graphSpaces"); for (String space : spaces) { Response r = this.client().get(managerPath(space), ImmutableMap.of("type", HugePermission.SPACE_MEMBER)); result = r.readEntity(String.class); resultMap = JsonUtil.fromJson(result, Map.class); - List spaceAdmins = (List)resultMap.get("admins"); + List spaceAdmins = (List) resultMap.get("admins"); for (String user : spaceAdmins) { this.client().delete(managerPath(space), ImmutableMap.of("user", user, @@ -89,7 +89,7 @@ public void deleteAdmins() { ImmutableMap.of("type", HugePermission.ADMIN)); String result = r.readEntity(String.class); Map resultMap = JsonUtil.fromJson(result, Map.class); - List admins = (List)resultMap.get("admins"); + List admins = (List) resultMap.get("admins"); for (String user : admins) { if ("admin".equals(user)) { continue; @@ -103,13 +103,13 @@ public void deleteSpaceAdmins() { Response r1 = this.client().get("/graphspaces"); String result = r1.readEntity(String.class); Map resultMap = JsonUtil.fromJson(result, Map.class); - List spaces = (List)resultMap.get("graphSpaces"); + List spaces = (List) resultMap.get("graphSpaces"); for (String space : spaces) { Response r = this.client().get(managerPath(space), ImmutableMap.of("type", HugePermission.SPACE)); result = r.readEntity(String.class); resultMap = JsonUtil.fromJson(result, Map.class); - List spaceAdmins = (List)resultMap.get("admins"); + List spaceAdmins = (List) resultMap.get("admins"); for (String user : spaceAdmins) { this.client().delete(managerPath(space), ImmutableMap.of("user", user, @@ -124,7 +124,7 @@ public void deleteUsers() { if (user.get("user_name").equals("admin")) { continue; } - this.client().delete(USER_PATH, (String)user.get("id")); + this.client().delete(USER_PATH, (String) user.get("id")); } } @@ -153,8 +153,10 @@ public void testSpaceMemberCRUD() { client().get(managerPath("testspace") + "/check", ImmutableMap.of("type", 
HugePermission.SPACE_MEMBER)); - RestClient member1Client = new RestClient(baseUrl(), "test_member1", "password1"); - RestClient member2Client = new RestClient(baseUrl(), "test_member2", "password1"); + RestClient member1Client = + new RestClient(baseUrl(), "test_member1", "password1"); + RestClient member2Client = + new RestClient(baseUrl(), "test_member2", "password1"); String res1 = member1Client.get(managerPath("testspace") + "/check", ImmutableMap.of("type", @@ -212,8 +214,10 @@ public void testPermission() { r = client().post(managerPath("testspace"), spaceManager); assertResponseStatus(201, r); - RestClient spaceMemberClient = new RestClient(baseUrl(), "perm_member", "password1"); - RestClient spaceManagerClient = new RestClient(baseUrl(), "perm_manager", "password1"); + RestClient spaceMemberClient = + new RestClient(baseUrl(), "perm_member", "password1"); + RestClient spaceManagerClient = + new RestClient(baseUrl(), "perm_manager", "password1"); String userPath = "graphspaces/testspace/graphs/testgraph/auth/users"; String user = "{\"user_name\":\"" + "test_perm_user" + @@ -322,17 +326,15 @@ protected List> listUsers(String graphSpace, String graph) { Response r = this.client().get(userPath, ImmutableMap.of("limit", NO_LIMIT)); String result = assertResponseStatus(200, r); - TypeReference>>> typeRef = - new TypeReference>>>() { - }; - Map>> resultMap = JsonUtil.fromJson(result, - typeRef); + Map>> resultMap = + JsonUtil.fromJson(result, new TypeReference>>>() { + }); return resultMap.get("users"); } /** - * Test space manager boundary: SpaceA's manager cannot operate SpaceB's - * resources + * Test space manager boundary: SpaceA's manager cannot operate SpaceB's resources */ @Test public void testSpaceManagerBoundary() { @@ -489,11 +491,9 @@ public void testSpaceManagerCannotPromoteUsersInOtherSpaces() { response.contains("no permission")); // Verify: manageralpha CAN promote usertest to be spacealpha's member - // But this will fail because manageralpha 
doesn't have permission to read user - // from + // But this will fail because manageralpha doesn't have permission to read user from // DEFAULT space - // This is expected behavior - space managers should only manage users already - // in their + // This is expected behavior - space managers should only manage users already in their // space // or admin should assign users to spaces first @@ -640,8 +640,7 @@ public void testSpaceManagerAndMemberResourcePermissions() { String vertexJson = "{\"label\":\"person\",\"properties\":{\"age\":30}}"; r = managerClient.post(vertexPath, vertexJson); String response2 = r.readEntity(String.class); - // Note: Vertex write might require specific permissions depending on - // configuration + // Note: Vertex write might require specific permissions depending on configuration // We check if it's either allowed (201) or forbidden (403) int status = r.getStatus(); Assert.assertTrue("Status should be 201 or 403, but was: " + status, @@ -660,8 +659,7 @@ public void testSpaceManagerAndMemberResourcePermissions() { r = outsiderClient.post(vertexPath, vertexJson3); Assert.assertEquals(403, r.getStatus()); - // Test 7: Space manager can manage space members (already tested in other - // tests) + // Test 7: Space manager can manage space members (already tested in other tests) // Test 8: Space member cannot manage space members this.createUser("newuser"); String addMemberJson = "{\"user\":\"newuser\",\"type\":\"SPACE_MEMBER\"}"; diff --git a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/core/MultiGraphsTest.java b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/core/MultiGraphsTest.java index 4fae0f76c6..5c34236857 100644 --- a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/core/MultiGraphsTest.java +++ b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/core/MultiGraphsTest.java @@ -248,7 +248,7 @@ public void testCreateGraphsWithInvalidNames() { @Test public void 
testCreateGraphsWithSameName() { - List graphs = openGraphs("g", "g", "G"); + List graphs = openGraphs("gg", "gg", "GG"); HugeGraph g1 = graphs.get(0); HugeGraph g2 = graphs.get(1); HugeGraph g3 = graphs.get(2); diff --git a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/core/TaskCoreTest.java b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/core/TaskCoreTest.java index 212ccc0588..3811a46f02 100644 --- a/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/core/TaskCoreTest.java +++ b/hugegraph-server/hugegraph-test/src/main/java/org/apache/hugegraph/core/TaskCoreTest.java @@ -17,8 +17,8 @@ package org.apache.hugegraph.core; -import java.util.Arrays; import java.util.Iterator; +import java.util.List; import java.util.Random; import java.util.concurrent.TimeoutException; @@ -33,6 +33,7 @@ import org.apache.hugegraph.job.GremlinJob; import org.apache.hugegraph.job.JobBuilder; import org.apache.hugegraph.task.HugeTask; +import org.apache.hugegraph.task.StandardTaskScheduler; import org.apache.hugegraph.task.TaskCallable; import org.apache.hugegraph.task.TaskScheduler; import org.apache.hugegraph.task.TaskStatus; @@ -76,12 +77,14 @@ public void testTask() throws TimeoutException { Assert.assertEquals(id, task.id()); Assert.assertFalse(task.completed()); - Assert.assertThrows(IllegalArgumentException.class, () -> { - scheduler.delete(id, false); - }, e -> { - Assert.assertContains("Can't delete incomplete task '88888'", - e.getMessage()); - }); + if (scheduler.getClass().equals(StandardTaskScheduler.class)) { + Assert.assertThrows(IllegalArgumentException.class, () -> { + scheduler.delete(id, false); + }, e -> { + Assert.assertContains("Can't delete incomplete task '88888'", + e.getMessage()); + }); + } task = scheduler.waitUntilTaskCompleted(task.id(), 10); Assert.assertEquals(id, task.id()); @@ -89,7 +92,7 @@ public void testTask() throws TimeoutException { Assert.assertEquals(TaskStatus.SUCCESS, task.status()); 
Assert.assertEquals("test-task", scheduler.task(id).name()); - Assert.assertEquals("test-task", scheduler.tasks(Arrays.asList(id)) + Assert.assertEquals("test-task", scheduler.tasks(List.of(id)) .next().name()); Iterator> iter = scheduler.tasks(ImmutableList.of(id)); @@ -196,13 +199,18 @@ public Object execute() throws Exception { Assert.assertEquals("test", task.type()); Assert.assertFalse(task.completed()); - HugeTask task2 = scheduler.waitUntilTaskCompleted(task.id(), 10); + // Ephemeral tasks are node-local and not persisted to DB. + // Use Future.get() to wait for completion instead of ID-based lookup. + try { + task.get(10, java.util.concurrent.TimeUnit.SECONDS); + } catch (Exception e) { + throw new RuntimeException("Ephemeral task execution failed", e); + } + Assert.assertEquals(TaskStatus.SUCCESS, task.status()); Assert.assertEquals("{\"k1\":13579,\"k2\":\"24680\"}", task.result()); - Assert.assertEquals(TaskStatus.SUCCESS, task2.status()); - Assert.assertEquals("{\"k1\":13579,\"k2\":\"24680\"}", task2.result()); - + // Ephemeral tasks are not stored in DB, so these should throw NotFoundException Assert.assertThrows(NotFoundException.class, () -> { scheduler.waitUntilTaskCompleted(task.id(), 10); }); @@ -557,7 +565,12 @@ public void testGremlinJobAndCancel() throws TimeoutException { scheduler.cancel(task); task = scheduler.task(task.id()); - Assert.assertEquals(TaskStatus.CANCELLING, task.status()); + // For DistributedTaskScheduler, local cancel may result in CANCELLED directly + // (task thread updates status after being interrupted) + // or CANCELLING (if task hasn't processed the interrupt yet) + Assert.assertTrue("Task status should be CANCELLING or CANCELLED, but was " + task.status(), + task.status() == TaskStatus.CANCELLING || + task.status() == TaskStatus.CANCELLED); task = scheduler.waitUntilTaskCompleted(task.id(), 10); Assert.assertEquals(TaskStatus.CANCELLED, task.status()); @@ -629,46 +642,51 @@ public void testGremlinJobAndRestore() throws 
Exception { scheduler.cancel(task); task = scheduler.task(task.id()); - Assert.assertEquals(TaskStatus.CANCELLING, task.status()); + Assert.assertTrue("Task status should be CANCELLING or CANCELLED, but was " + task.status(), + task.status() == TaskStatus.CANCELLING || + task.status() == TaskStatus.CANCELLED); task = scheduler.waitUntilTaskCompleted(task.id(), 10); Assert.assertEquals(TaskStatus.CANCELLED, task.status()); Assert.assertTrue("progress=" + task.progress(), 0 < task.progress() && task.progress() < 10); Assert.assertEquals(0, task.retries()); - Assert.assertEquals(null, task.result()); + Assert.assertNull(task.result()); HugeTask finalTask = task; - Assert.assertThrows(IllegalArgumentException.class, () -> { - Whitebox.invoke(scheduler.getClass(), "restore", scheduler, - finalTask); - }, e -> { - Assert.assertContains("No need to restore completed task", - e.getMessage()); - }); - HugeTask task2 = scheduler.task(task.id()); - Assert.assertThrows(IllegalArgumentException.class, () -> { + // because Distributed do nothing in restore, so only test StandardTaskScheduler here + if (scheduler.getClass().equals(StandardTaskScheduler.class)) { + Assert.assertThrows(IllegalArgumentException.class, () -> { + Whitebox.invoke(scheduler.getClass(), "restore", scheduler, + finalTask); + }, e -> { + Assert.assertContains("No need to restore completed task", + e.getMessage()); + }); + + HugeTask task2 = scheduler.task(task.id()); + Assert.assertThrows(IllegalArgumentException.class, () -> { + Whitebox.invoke(scheduler.getClass(), "restore", scheduler, task2); + }, e -> { + Assert.assertContains("No need to restore completed task", + e.getMessage()); + }); + + Whitebox.setInternalState(task2, "status", TaskStatus.RUNNING); Whitebox.invoke(scheduler.getClass(), "restore", scheduler, task2); - }, e -> { - Assert.assertContains("No need to restore completed task", - e.getMessage()); - }); - - Whitebox.setInternalState(task2, "status", TaskStatus.RUNNING); - 
Whitebox.invoke(scheduler.getClass(), "restore", scheduler, task2); - Assert.assertThrows(IllegalArgumentException.class, () -> { - Whitebox.invoke(scheduler.getClass(), "restore", scheduler, task2); - }, e -> { - Assert.assertContains("is already in the queue", e.getMessage()); - }); - - scheduler.waitUntilTaskCompleted(task2.id(), 10); - sleepAWhile(500); - Assert.assertEquals(10, task2.progress()); - Assert.assertEquals(1, task2.retries()); - Assert.assertEquals("100", task2.result()); + Assert.assertThrows(IllegalArgumentException.class, () -> { + Whitebox.invoke(scheduler.getClass(), "restore", scheduler, task2); + }, e -> { + Assert.assertContains("is already in the queue", e.getMessage()); + }); + scheduler.waitUntilTaskCompleted(task2.id(), 10); + sleepAWhile(500); + Assert.assertEquals(10, task2.progress()); + Assert.assertEquals(1, task2.retries()); + Assert.assertEquals("100", task2.result()); + } } private HugeTask runGremlinJob(String gremlin) { diff --git a/hugegraph-server/pom.xml b/hugegraph-server/pom.xml index 1ef5ed9574..a4dac32ab5 100644 --- a/hugegraph-server/pom.xml +++ b/hugegraph-server/pom.xml @@ -37,7 +37,7 @@ ${project.basedir}/.. 
- apache-${release.name}-server-${project.version} + apache-${release.name}-server-incubating-${project.version} 1.7.5 1.2.17 2.17.1 diff --git a/hugegraph-store/Dockerfile b/hugegraph-store/Dockerfile index e14a310338..c0b4b71cbd 100644 --- a/hugegraph-store/Dockerfile +++ b/hugegraph-store/Dockerfile @@ -30,7 +30,7 @@ RUN mvn package $MAVEN_ARGS -e -B -ntp -Dmaven.test.skip=true -Dmaven.javadoc.sk # Note: ZGC (The Z Garbage Collector) is only supported on ARM-Mac with java > 13 FROM eclipse-temurin:11-jre-jammy -COPY --from=build /pkg/hugegraph-store/apache-hugegraph-store-*/ /hugegraph-store/ +COPY --from=build /pkg/hugegraph-store/apache-hugegraph-store-incubating-*/ /hugegraph-store/ LABEL maintainer="HugeGraph Docker Maintainers " # TODO: use g1gc or zgc as default diff --git a/hugegraph-store/README.md b/hugegraph-store/README.md index 10f6e61587..ba41ab95ca 100644 --- a/hugegraph-store/README.md +++ b/hugegraph-store/README.md @@ -104,12 +104,12 @@ From the project root: mvn install -pl hugegraph-struct -am -DskipTests # Build Store and all dependencies -mvn clean package -pl hugegraph-store/hg-store-dist -am -DskipTests +mvn clean package -pl hugegraph-store/hugegraph-store-dist -am -DskipTests ``` The assembled distribution will be available at: ``` -hugegraph-store/apache-hugegraph-store-/lib/hg-store-node-.jar +hugegraph-store/apache-hugegraph-store-incubating-1.7.0/lib/hg-store-node-1.7.0.jar``` ``` ### Configuration @@ -214,9 +214,7 @@ Start the Store server: ```bash # Replace {version} with your hugegraph version -# For historical 1.7.0 and earlier releases, use -# apache-hugegraph-store-incubating-{version} instead. 
-cd apache-hugegraph-store-{version} +cd apache-hugegraph-store-incubating-{version} # Start Store node bin/start-hugegraph-store.sh diff --git a/hugegraph-store/docs/deployment-guide.md b/hugegraph-store/docs/deployment-guide.md index e92e99171f..d45b713c42 100644 --- a/hugegraph-store/docs/deployment-guide.md +++ b/hugegraph-store/docs/deployment-guide.md @@ -416,7 +416,6 @@ df -h ```bash # Extract PD distribution -# Note: use "-incubating" only for historical 1.7.0 and earlier package/directory names. tar -xzf apache-hugegraph-pd-incubating-1.7.0.tar.gz cd apache-hugegraph-pd-incubating-1.7.0 @@ -510,7 +509,6 @@ curl http://192.168.1.10:8620/v1/members ```bash # Extract Store distribution -# Note: use "-incubating" only for historical 1.7.0 and earlier package/directory names. tar -xzf apache-hugegraph-store-incubating-1.7.0.tar.gz cd apache-hugegraph-store-incubating-1.7.0 @@ -597,7 +595,7 @@ curl http://192.168.1.10:8620/v1/stores "address":"192.168.1.10:8500", "raftAddress":"192.168.1.10:8510", "version":"","state":"Up", - "deployPath":"/Users/user/hugegraph/hugegraph-store/hg-store-node/target/classes/", + "deployPath":"/Users/user/incubator-hugegraph/hugegraph-store/hg-store-node/target/classes/", "dataPath":"./storage", "startTimeStamp":1761818547335, "registedTimeStamp":1761818547335, @@ -628,7 +626,6 @@ curl http://192.168.1.10:8620/v1/stores ```bash # Extract Server distribution -# Note: use "-incubating" only for historical 1.7.0 and earlier package/directory names. tar -xzf apache-hugegraph-incubating-1.7.0.tar.gz cd apache-hugegraph-incubating-1.7.0 diff --git a/hugegraph-store/docs/development-guide.md b/hugegraph-store/docs/development-guide.md index 44136776e7..c255e56827 100644 --- a/hugegraph-store/docs/development-guide.md +++ b/hugegraph-store/docs/development-guide.md @@ -58,7 +58,7 @@ git checkout 1.7-rebase 2. 
Add new "Application" configuration: - Main class: `org.apache.hugegraph.store.node.StoreNodeApplication` - VM options: `-Xms4g -Xmx4g -Dconfig.file=conf/application.yml` - - Working directory: `hugegraph-store/apache-hugegraph-store-` (`apache-hugegraph-store-incubating-` for historical 1.7.0 and earlier directories) + - Working directory: `hugegraph-store/hg-store-dist/target/apache-hugegraph-store-incubating-1.7.0` - Use classpath of module: `hg-store-node` ### Build from Source @@ -216,9 +216,7 @@ hg-store-grpc/ **Start Server**: ```bash -# Historical 1.7.0 and earlier directories use -# apache-hugegraph-store-incubating- instead. -cd hugegraph-store/apache-hugegraph-store- +cd hugegraph-store/hg-store-dist/target/apache-hugegraph-store-incubating-1.7.0 bin/start-hugegraph-store.sh ``` @@ -244,7 +242,7 @@ mvn compile ```bash mvn clean package -DskipTests -# Output: hugegraph-store/apache-hugegraph-store-.tar.gz +# Output: hg-store-dist/target/apache-hugegraph-store-incubating-.tar.gz ``` **Regenerate gRPC stubs** (after modifying `.proto` files): diff --git a/hugegraph-store/docs/operations-guide.md b/hugegraph-store/docs/operations-guide.md index f46b5559d7..a937d52bff 100644 --- a/hugegraph-store/docs/operations-guide.md +++ b/hugegraph-store/docs/operations-guide.md @@ -593,7 +593,6 @@ curl http://192.168.1.10:8620/v1/partitionsAndStatus 1. 
**Deploy New Store Node**: ```bash # Follow deployment guide - # Historical 1.7.0 packages still include the "-incubating" suffix tar -xzf apache-hugegraph-store-incubating-1.7.0.tar.gz cd apache-hugegraph-store-incubating-1.7.0 @@ -673,9 +672,9 @@ bin/stop-hugegraph-store.sh # Backup current version mv apache-hugegraph-store-incubating-1.7.0 apache-hugegraph-store-incubating-1.7.0-backup -# Extract new version (newer releases no longer include "-incubating") -tar -xzf apache-hugegraph-store-1.8.0.tar.gz -cd apache-hugegraph-store-1.8.0 +# Extract new version +tar -xzf apache-hugegraph-store-incubating-1.8.0.tar.gz +cd apache-hugegraph-store-incubating-1.8.0 # Copy configuration from backup cp ../apache-hugegraph-store-incubating-1.7.0-backup/conf/application.yml conf/ @@ -715,7 +714,7 @@ If upgrade fails: bin/stop-hugegraph-store.sh # Restore backup -rm -rf apache-hugegraph-store-1.8.0 +rm -rf apache-hugegraph-store-incubating-1.8.0 mv apache-hugegraph-store-incubating-1.7.0-backup apache-hugegraph-store-incubating-1.7.0 cd apache-hugegraph-store-incubating-1.7.0 diff --git a/hugegraph-store/hg-store-dist/docker/docker-entrypoint.sh b/hugegraph-store/hg-store-dist/docker/docker-entrypoint.sh old mode 100755 new mode 100644 index 1bdaaafc5a..5aa77621dc --- a/hugegraph-store/hg-store-dist/docker/docker-entrypoint.sh +++ b/hugegraph-store/hg-store-dist/docker/docker-entrypoint.sh @@ -15,67 +15,8 @@ # See the License for the specific language governing permissions and # limitations under the License. 
 #
-set -euo pipefail
-log() { echo "[hugegraph-store-entrypoint] $*"; }
+# start hugegraph store
+./bin/start-hugegraph-store.sh -j "$JAVA_OPTS"
-require_env() {
-    local name="$1"
-    if [[ -z "${!name:-}" ]]; then
-        echo "ERROR: missing required env '${name}'" >&2; exit 2
-    fi
-}
-
-json_escape() {
-    local s="$1"
-    s=${s//\\/\\\\}; s=${s//\"/\\\"}; s=${s//$'\n'/}
-    printf "%s" "$s"
-}
-
-# ── Guard deprecated vars ─────────────────────────────────────────────
-migrate_env() {
-    local old_name="$1" new_name="$2"
-
-    if [[ -n "${!old_name:-}" && -z "${!new_name:-}" ]]; then
-        log "WARN: deprecated env '${old_name}' detected; mapping to '${new_name}'"
-        export "${new_name}=${!old_name}"
-    fi
-}
-
-migrate_env "PD_ADDRESS" "HG_STORE_PD_ADDRESS"
-migrate_env "GRPC_HOST" "HG_STORE_GRPC_HOST"
-migrate_env "RAFT_ADDRESS" "HG_STORE_RAFT_ADDRESS"
-# ── Required vars ─────────────────────────────────────────────────────
-require_env "HG_STORE_PD_ADDRESS"
-require_env "HG_STORE_GRPC_HOST"
-require_env "HG_STORE_RAFT_ADDRESS"
-
-# ── Defaults ──────────────────────────────────────────────────────────
-: "${HG_STORE_GRPC_PORT:=8500}"
-: "${HG_STORE_REST_PORT:=8520}"
-: "${HG_STORE_DATA_PATH:=/hugegraph-store/storage}"
-
-# ── Build SPRING_APPLICATION_JSON ─────────────────────────────────────
-SPRING_APPLICATION_JSON="$(cat <
         2.15.0
-        apache-${release.name}-store-${project.version}
+        apache-${release.name}-store-incubating-${project.version}
diff --git a/hugegraph-struct/src/main/java/org/apache/hugegraph/options/CoreOptions.java b/hugegraph-struct/src/main/java/org/apache/hugegraph/options/CoreOptions.java
index 849539419b..caf0146bb9 100644
--- a/hugegraph-struct/src/main/java/org/apache/hugegraph/options/CoreOptions.java
+++ b/hugegraph-struct/src/main/java/org/apache/hugegraph/options/CoreOptions.java
@@ -295,13 +295,7 @@ public class CoreOptions extends OptionHolder {
                     rangeInt(1, 500),
                     1
             );
-    public static final ConfigOption<String> SCHEDULER_TYPE =
-            new ConfigOption<>(
-                    "task.scheduler_type",
-                    "The type of scheduler used in distribution system.",
-                    allowValues("local", "distributed"),
-                    "local"
-            );
+
     public static final ConfigOption<Boolean> TASK_SYNC_DELETION =
             new ConfigOption<>(
                     "task.sync_deletion",
diff --git a/install-dist/pom.xml b/install-dist/pom.xml
index 45de069d07..0b6ffa9901 100644
--- a/install-dist/pom.xml
+++ b/install-dist/pom.xml
@@ -29,7 +29,7 @@
     install-dist
-    apache-${release.name}-${project.version}
+    apache-${release.name}-incubating-${project.version}
@@ -50,10 +50,10 @@
                 cd $root_path || exit
                 mkdir -p ${final.name}
-                cp -r -v $root_path/hugegraph-pd/apache-hugegraph-pd-${project.version} ${final.name}/ || exit
-                cp -r -v $root_path/hugegraph-store/apache-hugegraph-store-${project.version} ${final.name}/ || exit
-                cp -r -v $root_path/hugegraph-server/apache-hugegraph-server-${project.version} ${final.name}/ || exit
-                cp -r -v $root_path/install-dist/release-docs/* ${final.name}/ || exit
+                cp -r -v $root_path/hugegraph-pd/apache-hugegraph-pd-incubating-${project.version} ${final.name}/ || exit
+                cp -r -v $root_path/hugegraph-store/apache-hugegraph-store-incubating-${project.version} ${final.name}/ || exit
+                cp -r -v $root_path/hugegraph-server/apache-hugegraph-server-incubating-${project.version} ${final.name}/ || exit
+                cp -r -v $root_path/install-dist/release-docs/* $root_path/DISCLAIMER ${final.name}/ || exit

                 tar zcvf $root_path/target/${final.name}.tar.gz ./${final.name} || exit 1
diff --git a/install-dist/release-docs/LICENSE b/install-dist/release-docs/LICENSE
index bd00dbc118..9a1afd7663 100644
--- a/install-dist/release-docs/LICENSE
+++ b/install-dist/release-docs/LICENSE
@@ -203,9 +203,9 @@

    ============================================================================

-   APACHE HUGEGRAPH SUBCOMPONENTS:
+   APACHE HUGEGRAPH (Incubating) SUBCOMPONENTS:

-   The Apache HugeGraph project contains subcomponents with separate copyright
+   The Apache HugeGraph(Incubating) project contains subcomponents with separate
    copyright notices and license terms. Your use of the source code for these
    subcomponents is subject to the terms and conditions of the following
    licenses.
diff --git a/install-dist/release-docs/NOTICE b/install-dist/release-docs/NOTICE
index 247b9dadd0..f3eb6d4cc4 100644
--- a/install-dist/release-docs/NOTICE
+++ b/install-dist/release-docs/NOTICE
@@ -1,5 +1,5 @@
-Apache HugeGraph
-Copyright 2022-2026 The Apache Software Foundation
+Apache HugeGraph(incubating)
+Copyright 2022-2025 The Apache Software Foundation

 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
diff --git a/install-dist/scripts/apache-release.sh b/install-dist/scripts/apache-release.sh
index 0eb721ae8b..168217c425 100755
--- a/install-dist/scripts/apache-release.sh
+++ b/install-dist/scripts/apache-release.sh
@@ -41,13 +41,13 @@ rm -rf dist && mkdir -p dist/apache-${REPO}
 # step1: package the source code
 cd ../../ && echo "Package source in: $(pwd)"
 git archive --format=tar.gz \
-    --output="install-dist/scripts/dist/apache-${REPO}/apache-${REPO}-${RELEASE_VERSION}-src.tar.gz" \
-    --prefix=apache-${REPO}-"${RELEASE_VERSION}"-src/ "${GIT_BRANCH}" || exit
+    --output="install-dist/scripts/dist/apache-${REPO}/apache-${REPO}-incubating-${RELEASE_VERSION}-src.tar.gz" \
+    --prefix=apache-${REPO}-incubating-"${RELEASE_VERSION}"-src/ "${GIT_BRANCH}" || exit
 cd - || exit

 # step2: copy the binary file (Optional)
 # Note: it's optional for project to generate binary package (skip this step if not need)
-cp -v ../../target/apache-${REPO}-"${RELEASE_VERSION}".tar.gz dist/apache-${REPO} || exit
+cp -v ../../target/apache-${REPO}-incubating-"${RELEASE_VERSION}".tar.gz dist/apache-${REPO} || exit

 # step3: sign + hash
 ##### 3.1 sign in source & binary package
@@ -80,7 +80,7 @@ SVN_DIR="${GROUP}-svn-dev"
 cd ../
 rm -rfv ${SVN_DIR}

-svn co "https://dist.apache.org/repos/dist/dev/${GROUP}" ${SVN_DIR}
+svn co "https://dist.apache.org/repos/dist/dev/incubator/${GROUP}" ${SVN_DIR}

 ##### 4.2 copy new release package to svn directory
 mkdir -p ${SVN_DIR}/"${RELEASE_VERSION}"
diff --git a/pom.xml b/pom.xml
index 850ac99fa8..d2595823fa 100644
--- a/pom.xml
+++ b/pom.xml
@@ -47,7 +47,7 @@

-    Apache HugeGraph
+    Apache HugeGraph(incubating)
     dev-subscribe@hugegraph.apache.org
     https://hugegraph.apache.org/

@@ -58,7 +58,7 @@
         Development Mailing List
         dev-subscribe@hugegraph.apache.org
         dev-unsubscribe@hugegraph.apache.org
-        dev@hugegraph.apache.org
+        dev@hugegraph.incubator.apache.org

     Commits List
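The most intricate piece removed from the store entrypoint above is the deprecated-variable mapping. Its behavior can be sketched as a standalone script; the `migrate_env` body and the `PD_ADDRESS`/`HG_STORE_*` names are taken from the deleted script, while the sample values and the final `echo` lines are illustrative only:

```shell
#!/usr/bin/env bash
# Sketch of the deprecated-env mapping removed from the hugegraph-store
# entrypoint; not the shipped script.
set -euo pipefail

migrate_env() {
    local old_name="$1" new_name="$2"
    # Map old -> new only when the new-style variable is unset, so that
    # explicit new-style configuration always wins over the legacy name.
    if [[ -n "${!old_name:-}" && -z "${!new_name:-}" ]]; then
        echo "WARN: deprecated env '${old_name}' detected; mapping to '${new_name}'" >&2
        export "${new_name}=${!old_name}"
    fi
}

export PD_ADDRESS="pd:8686"        # only the legacy name is set -> mapped
export GRPC_HOST="legacy-host"     # legacy name set, but ...
export HG_STORE_GRPC_HOST="store0" # ... the new name is set too -> kept

migrate_env "PD_ADDRESS" "HG_STORE_PD_ADDRESS"
migrate_env "GRPC_HOST" "HG_STORE_GRPC_HOST"

echo "HG_STORE_PD_ADDRESS=${HG_STORE_PD_ADDRESS}"  # pd:8686
echo "HG_STORE_GRPC_HOST=${HG_STORE_GRPC_HOST}"    # store0
```

Note the precedence choice: because the mapping fires only when the new variable is empty, operators who have already migrated to `HG_STORE_*` names are unaffected, while old deployments keep working with a warning on stderr.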