API Changes in ArangoDB 3.11

A summary of the changes to the HTTP API and other interfaces that are relevant for developers, like maintainers of drivers and integrations for ArangoDB.

HTTP RESTful API

Behavior changes

Extended naming constraints for collections, Views, and indexes

In ArangoDB 3.9, the --database.extended-names-databases startup option was added to optionally allow database names to contain most UTF-8 characters. The startup option has been renamed to --database.extended-names in 3.11 and now controls whether you want to use the extended naming constraints for database, collection, View, and index names.

The feature is disabled by default to ensure compatibility with existing client drivers and applications that only support ASCII names according to the traditional naming constraints used in previous ArangoDB versions.

If the feature is enabled, then any endpoints that contain database, collection, View, or index names in the URL may contain special characters that were previously not allowed (in percent-encoded form). Such names can also be expected in payloads that contain database, collection, View, or index names, as well as in document identifiers (because they are comprised of the collection name and the document key). If client applications assemble URLs with extended names programmatically, they need to ensure that extended names are properly URL-encoded.

When using extended names, any Unicode characters in names need to be NFC-normalized. If you try to create a database, collection, View, or index with a non-NFC-normalized name, the server rejects it.
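
For example, a collection with a Unicode name can be created by sending the raw name in the payload, but the name must be percent-encoded when it appears in the URL. A minimal sketch, assuming extended names are enabled and a server on localhost (the collection name is hypothetical):

> curl -d '{"name":"日本語"}' http://localhost:8529/_api/collection

> curl http://localhost:8529/_api/collection/%E6%97%A5%E6%9C%AC%E8%AA%9E/properties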

The ArangoDB web interface as well as the arangobench, arangodump, arangoexport, arangoimport, arangorestore, and arangosh client tools ship with support for the extended naming constraints, but they require you to provide NFC-normalized names.

Please be aware that dumps containing extended names cannot be restored into older versions that only support the traditional naming constraints. In a cluster setup, it is required to use the same naming constraints for all Coordinators and DB-Servers of the cluster. Otherwise, the startup is refused. In DC2DC setups, it is also required to use the same naming constraints for both datacenters to avoid incompatibilities.

Stricter validation of Unicode surrogate values in JSON data

ArangoDB 3.11 employs a stricter validation of Unicode surrogate pairs in incoming JSON data, for all REST APIs.

In previous versions, the following loopholes existed when validating UTF-8 surrogate pairs in incoming JSON data:

  • a high surrogate, followed by something other than a low surrogate (or the end of the string)
  • a low surrogate, not preceded by a high surrogate

These validation loopholes have been closed in 3.11, which means that any JSON inputs containing such invalid surrogate pair data are rejected by the server.

This is normally the desired behavior, as it helps prevent invalid data from entering the database. However, if a database is known to contain invalid data and must continue supporting it (at least temporarily), the extended validation can be disabled by setting the server startup option --server.validate-utf8-strings to false. This is not recommended long-term, but only temporarily during upgrades or data cleanup.
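
For illustration, a lone high surrogate in a JSON string is now rejected. A sketch (the exact error response may differ):

> curl -d '{"v":"\ud800"}' http://localhost:8529/_api/document/coll

The server responds with an HTTP 400 Bad Request error instead of storing the document.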

Status code if write concern not fulfilled

The new --cluster.failed-write-concern-status-code startup option can be used to change the default 403 status code to 503 when the write concern cannot be fulfilled for a write operation to a collection in a cluster deployment. This signals client applications that it is a temporary error. Only the HTTP status code changes in this case, no automatic retry of the operation is attempted by the cluster.
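
A sketch of the startup invocation to opt into the temporary-error status code:

arangod --cluster.failed-write-concern-status-code 503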

Graph API (Gharial)

The POST /_api/gharial/ endpoint for creating named graphs validates the satellites property of the graph options for SmartGraphs differently now.

If the satellites property is set, it must be an array, either empty or containing collection name strings. If the value is not in that format, for example, a string or a null value, the error “Missing array for field satellites” is now returned. Previously, it returned “invalid parameter type”. If the graph is not a SmartGraph, the satellites property is ignored, unless its value is an array whose elements are not strings, in which case the error “Invalid parameter type” is returned.
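
For example, a valid request to create a SmartGraph with a satellite collection might look as follows. A sketch, assuming the Enterprise Edition (the graph and collection names are hypothetical):

> curl -d '{"name":"myGraph","isSmart":true,"edgeDefinitions":[{"collection":"relations","from":["vertices"],"to":["vertices"]}],"options":{"smartGraphAttribute":"region","satellites":["countries"]}}' http://localhost:8529/_api/gharial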

Validation of smartGraphAttribute in SmartGraphs

Introduced in: v3.10.13, v3.11.7

The attribute defined by the smartGraphAttribute graph property is not allowed to be changed in the documents of SmartGraph vertex collections. This is now strictly enforced. You must set the attribute when creating a document. Any attempt to modify or remove the attribute afterward by update or replace operations now throws an error. Previously, the smartGraphAttribute value was checked only when inserting documents into a SmartGraph vertex collection, but not for update or replace operations.

The missing checks on update and replace operations allowed to retroactively modify the value of the smartGraphAttribute for existing documents, which could have led to problems when the data of such a SmartGraph vertex collection was replicated to a new follower shard. On the new follower shard, the documents went through the full validation and led to documents with modified smartGraphAttribute values being rejected on the follower. This could have led to follower shards not getting in sync.

Now, the value of the smartGraphAttribute is fully validated with every insert, update, or replace operation, and every attempt to modify the value of the smartGraphAttribute retroactively fails with error 4003, ERROR_KEY_MUST_BE_PREFIXED_WITH_SMART_GRAPH_ATTRIBUTE. Additionally, if the smartGraphAttribute is missing when inserting a SmartGraph vertex, the operation fails with error 4001, ERROR_NO_SMART_GRAPH_ATTRIBUTE.

To retroactively repair the data in any of the affected collections, it is possible to update every (affected) document with the correct value of the smartGraphAttribute via an AQL query as follows:

FOR doc IN @@collection
  LET expected = SUBSTRING(doc._key, 0, FIND_FIRST(doc._key, ':'))
  LET actual = doc.@attr
  FILTER expected != actual
  UPDATE doc WITH {@attr: expected} IN @@collection
  COLLECT WITH COUNT INTO updated
  RETURN updated

This updates all documents with the correct (expected) value of the smartGraphAttribute if it deviates from the expected value. The query returns the number of updated documents as well.

The bind parameters necessary to run this query are:

  • @@collection: name of a SmartGraph vertex collection to be updated
  • @attr: attribute name of the smartGraphAttribute of the collection
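
For example, the query can be run via the HTTP API as follows. A sketch (the collection and attribute names are hypothetical); note that the @@collection bind parameter is passed as "@collection" and @attr as "attr":

> curl -d '{"query":"FOR doc IN @@collection LET expected = SUBSTRING(doc._key, 0, FIND_FIRST(doc._key, \":\")) LET actual = doc.@attr FILTER expected != actual UPDATE doc WITH {@attr: expected} IN @@collection COLLECT WITH COUNT INTO updated RETURN updated","bindVars":{"@collection":"myVertices","attr":"region"}}' http://localhost:8529/_api/cursor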

Database API

The POST /_api/database endpoint for creating a new database has changed. If the specified database name is invalid/illegal, it now returns the error code 1208 (ERROR_ARANGO_ILLEGAL_NAME). It previously returned 1229 (ERROR_ARANGO_DATABASE_NAME_INVALID) in this case.

This is a downwards-incompatible change, but unifies the behavior for database creation with the behavior of collection and View creation, which also return the error code 1208 in case the specified name is not allowed.
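
For example, creating a database with a name that violates the traditional naming constraints is now rejected like this (a sketch; the exact error message may vary):

> curl -d '{"name":"my database"}' http://localhost:8529/_api/database
{"code":400,"error":true,"errorNum":1208,"errorMessage":"illegal name"}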

Document API

The following endpoints support a new refillIndexCaches query parameter to repopulate the index caches after requests that insert, update, replace, or remove single or multiple documents (including edges) if this affects an edge index or cache-enabled persistent indexes:

  • POST /_api/document/{collection}
  • PATCH /_api/document/{collection}/{key}
  • PUT /_api/document/{collection}/{key}
  • DELETE /_api/document/{collection}/{key}

It is a boolean option and the default is false.

This also applies to the INSERT, UPDATE, REPLACE, and REMOVE operations in AQL queries, which support a refillIndexCaches option, too.

In 3.9 and 3.10, refillIndexCaches was experimental and limited to edge caches.
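
For example, to repopulate affected index caches when inserting a document (a sketch; the collection name is hypothetical):

> curl -d '{"_key":"doc1"}' 'http://localhost:8529/_api/document/myCollection?refillIndexCaches=true'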


Introduced in: v3.11.1

When inserting multiple documents/edges at once in a cluster, the Document API used to fail the entire request if any of the documents/edges could not be saved due to a key error. More specifically, if the value of a _key attribute contained illegal characters or did not meet additional requirements (for instance, those of a collection used in a Disjoint SmartGraph), the POST /_api/document/{collection} endpoint did not reply with the usual array containing either the document metadata or an error object for each attempted document insertion. Instead, it returned an error object for the first offending document only and aborted the operation with an HTTP 400 Bad Request status code, so that none of the documents were saved. Example:

> curl -d '[{"_key":"valid"},{"_key":"invalid space"}]' http://localhost:8529/_api/document/coll
{"code":400,"error":true,"errorMessage":"illegal document key","errorNum":1221}

> curl http://localhost:8529/_api/document/coll/valid
{"code":404,"error":true,"errorMessage":"document not found","errorNum":1202}

Now, such key errors in cluster deployments no longer fail the entire request, matching the behavior of single server deployments. Any errors are reported in the result array for the respective documents, along with the successful ones:

> curl -d '[{"_key":"valid"},{"_key":"invalid space"}]' http://localhost:8529/_api/document/coll
[{"_id":"coll/valid","_key":"valid","_rev":"_gG9JHsW---"},{"error":true,"errorNum":1221,"errorMessage":"illegal document key"}]

> curl http://localhost:8529/_api/document/coll/valid
{"_key":"valid","_id":"coll/valid","_rev":"_gG9JHsW---"}

Introduced in: v3.11.1

Using the Document API for reading multiple documents used to return an error if the request body was an empty array. Example:

> curl -XPUT -d '[]' 'http://localhost:8529/_api/document/coll?onlyget=true'
{"code":500,"error":true,"errorMessage":"internal error","errorNum":4}

Now, a request like this succeeds and returns an empty array as the response.

Collection API

The edge collections of EnterpriseGraphs and SmartGraphs (including Disjoint SmartGraphs and SmartGraphs using SatelliteCollections but excluding the edge collections of the SatelliteCollections) previously reported a value of 0 as the numberOfShards. They now return the actual number of shards. This value can be higher than the configured numberOfShards value of the graph due to internally used hidden collections.

Cursor API

When you link a collection to an arangosearch View and run an AQL query against this View while it is still being indexed, you now receive the query result including a warning. This warning alerts you about potentially incomplete results obtained from a partially indexed collection. The error code associated with this warning is 1240 (ERROR_ARANGO_INCOMPLETE_READ).


Introduced in: v3.9.11, v3.10.7

In AQL graph traversals (POST /_api/cursor endpoint), you can restrict the vertex and edge collections in the traversal options like so:

FOR v, e, p IN 1..3 OUTBOUND 'products/123' components
  OPTIONS {
    vertexCollections: [ "bolts", "screws" ],
    edgeCollections: [ "productsToBolts", "productsToScrews" ]
  }
  RETURN v 

If you specify collections that don’t exist, queries now fail with a “collection or view not found” error (code 1203 and HTTP status 404 Not Found). In previous versions, unknown vertex collections were ignored, and the behavior for unknown edge collections was undefined.

Additionally, the collection types are now validated. If a document collection or View is specified in edgeCollections, an error is raised (code 1218 and HTTP status 400 Bad Request).

Furthermore, it is now an error if you specify a vertex collection that is not part of the specified named graph (code 1926 and HTTP status 404 Not Found). It is also an error if you specify an edge collection that is not part of the named graph’s definition or of the list of edge collections (code 1939 and HTTP status 400 Bad Request).

Log API

Setting the log level for the graphs log topic to TRACE now logs detailed information about AQL graph traversals and (shortest) path searches. Some new log messages are also logged for the DEBUG level.
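
For example, to raise the log level of the graphs topic at runtime (a sketch, assuming an authenticated connection):

> curl -X PUT -d '{"graphs":"TRACE"}' http://localhost:8529/_admin/log/level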

Disabled Foxx APIs

Introduced in: v3.10.5

A --foxx.enable startup option has been added to arangod. It defaults to true. If the option is set to false, access to Foxx services is forbidden and rejected with an HTTP 403 Forbidden error. Access to the management APIs for Foxx services is also disabled, as if --foxx.api false were set manually.

Configurable whitespace in metrics

Introduced in: v3.10.6

The output format of the /_admin/metrics and /_admin/metrics/v2 endpoints changes slightly for metrics with labels. By default, the metric label and value are now separated by a space for improved compatibility with some tools. This is controlled by the new --server.ensure-whitespace-metrics-format startup option, which is enabled by default from v3.10.6 onward. Example:

Enabled:

arangodb_agency_cache_callback_number{role="SINGLE"} 0

Disabled:

arangodb_agency_cache_callback_number{role="SINGLE"}0

Limit to the number of databases in a deployment

Introduced in: v3.10.10, v3.11.2

The new --database.max-databases startup option can cap the number of databases in a deployment. Creating a database via the POST /_api/database endpoint can now fail for this reason if your deployment is at or above the configured maximum. Example:

{
  "code": 400,
  "error": true,
  "errorMessage": "unable to create additional database because it would exceed the configured maximum number of databases (2)",
  "errorNum": 32
}

Endpoint return value changes

Introduced in: v3.8.8, v3.9.4, v3.10.1

The encoding of revision IDs returned by the REST APIs listed below has changed:

  • GET /_api/collection/<collection-name>/revision: The revision ID was previously returned as numeric value, and now it is returned as a string value with either numeric encoding or HLC-encoding inside.
  • GET /_api/collection/<collection-name>/checksum: The revision ID in the revision attribute was previously encoded as a numeric value in single server, and as a string in cluster. This is now unified so that the revision attribute always contains a string value with either numeric encoding or HLC-encoding inside.

Endpoints added

Maintenance mode for DB-Servers

Introduced in: v3.10.1

For rolling upgrades or rolling restarts, DB-Servers can now be put into maintenance mode, so that no attempts are made to re-distribute the data in a cluster for such planned events. DB-Servers in maintenance mode are not considered viable failover targets because they are likely restarted soon.

To query the maintenance status of a DB-Server, use this new endpoint:

GET /_admin/cluster/maintenance/<DB-Server-ID>

An example reply of a DB-Server that is in maintenance mode:

{
  "error": false,
  "code": 200,
  "result": {
    "Mode": "maintenance",
    "Until": "2022-10-26T06:14:23Z"
  }
}

If the DB-Server is not in maintenance mode, then the result attribute is omitted:

{
  "error": false,
  "code": 200
}

To put a DB-Server into maintenance mode, use this new endpoint:

PUT /_admin/cluster/maintenance/<DB-Server-ID>

The payload of the request needs to be as follows, with the timeout in seconds:

{
  "mode": "maintenance",
  "timeout": 360
}

To turn the maintenance mode off, set mode to "normal" instead, and omit the timeout attribute or set it to 0.

You can send another request when the DB-Server is already in maintenance mode to extend the timeout.

The maintenance mode ends automatically after the defined timeout.
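
For example, putting a DB-Server into maintenance mode for five minutes and turning it off again early might look like this (a sketch; substitute the actual DB-Server ID):

> curl -X PUT -d '{"mode":"maintenance","timeout":300}' http://localhost:8529/_admin/cluster/maintenance/<DB-Server-ID>

> curl -X PUT -d '{"mode":"normal"}' http://localhost:8529/_admin/cluster/maintenance/<DB-Server-ID>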

Also see the HTTP interface for cluster maintenance.

Shard usage metrics

Introduced in: v3.10.13, v3.11.7

With GET /_admin/usage-metrics you can retrieve detailed shard usage metrics on DB-Servers.

These metrics can be enabled by setting the --server.export-shard-usage-metrics startup option to enabled-per-shard to make DB-Servers collect per-shard usage metrics, or to enabled-per-shard-per-user to make DB-Servers collect usage metrics per shard and per user whenever a shard is accessed.

For more information, see the HTTP API description and Monitoring per collection/database/user.

Endpoints augmented

Cursor API

  • The POST /_api/cursor and POST /_api/cursor/{cursor-identifier} endpoints can now return an additional statistics value in the stats sub-attribute, intermediateCommits. It is the total number of intermediate commits the query has performed. This number can only be greater than zero for data modification queries that perform modifications beyond the --rocksdb.intermediate-commit-count or --rocksdb.intermediate-commit-size thresholds. In clusters, the intermediate commits are tracked per DB-Server that participates in the query and are summed up in the end.

  • The /_api/cursor endpoint accepts a new allowRetry attribute in the options object. Set this option to true to make it possible to retry fetching the latest batch from a cursor. The default is false.

    If retrieving a result batch fails because of a connection issue, you can ask for that batch again using the new POST /_api/cursor/<cursor-id>/<batch-id> endpoint. The first batch has an ID of 1 and the value is incremented by 1 with every batch. Every result response except the last one also includes a nextBatchId attribute, indicating the ID of the batch that follows the current one. You can remember and use this batch ID should retrieving the next batch fail.

    You can only request the latest batch again (or the next batch). Earlier batches are not kept on the server-side. Requesting a batch again does not advance the cursor.

    You can also call this endpoint with the next batch identifier, i.e. the value returned in the nextBatchId attribute of a previous request. This advances the cursor and returns the results of the next batch. This is only supported if there are more results in the cursor (i.e. hasMore is true in the latest batch).

    From v3.11.1 onward, you may use the POST /_api/cursor/<cursor-id>/<batch-id> endpoint even if the allowRetry attribute is false to fetch the next batch, but you cannot request a batch again unless you set it to true. The nextBatchId attribute is always present in result objects (except in the last batch) from v3.11.1 onward.

    To allow refetching of the very last batch of the query, the server cannot automatically delete the cursor. After the first attempt of fetching the last batch, the server would normally delete the cursor to free up resources. As you might need to reattempt the fetch, it needs to keep the final batch when the allowRetry option is enabled. Once you have successfully received the last batch, you should call the DELETE /_api/cursor/<cursor-id> endpoint so that the server doesn’t unnecessarily keep the batch until the cursor times out (ttl query option). A sketch of this retry flow follows after this list.

  • When profiling a query (profile option true, 1, or 2), the profile object returned under extra now includes a new "instantiating executors" attribute with the time needed to create the query executors, and in cluster mode, this also includes the time needed for physically distributing the query snippets to the participating DB-Servers. Previously, the time spent for instantiating executors and the physical distribution was contained in the optimizing plan stage.

  • The endpoint supports a new maxDNFConditionMembers query option, which is a threshold for the maximum number of OR sub-nodes in the internal representation of an AQL FILTER condition and defaults to 786432.
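
The batch retry flow mentioned above might look as follows. A sketch (the cursor ID and query are hypothetical, and responses are abbreviated):

> curl -d '{"query":"FOR i IN 1..6 RETURN i","batchSize":2,"options":{"allowRetry":true}}' http://localhost:8529/_api/cursor
{"result":[1,2],"hasMore":true,"id":"12345","nextBatchId":"2", ...}

> curl -X POST http://localhost:8529/_api/cursor/12345/2
{"result":[3,4],"hasMore":true,"id":"12345","nextBatchId":"3", ...}

If fetching a batch fails due to a connection issue, repeat the same call to retry it. Once the last batch has been received successfully, delete the cursor:

> curl -X DELETE http://localhost:8529/_api/cursor/12345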

Analyzer types

The /_api/analyzer endpoint supports a new Analyzer type in the Enterprise Edition:

  • geo_s2 (introduced in v3.10.5): Like the existing geojson Analyzer, but with an additional format property that can be set to "latLngDouble" (default), "latLngInt", or "s2Point".
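
Creating such an Analyzer might look like this. A sketch, assuming the Enterprise Edition (the Analyzer name is hypothetical):

> curl -d '{"name":"geo_efficient","type":"geo_s2","properties":{"format":"latLngInt"}}' http://localhost:8529/_api/analyzer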

Query API

The GET /_api/query/current and GET /_api/query/slow endpoints include a new numeric peakMemoryUsage attribute.


The GET /_api/query/current endpoint can return a new value "instantiating executors" as state in the query list.

Index API

Progress indication on the index generation

Introduced in: v3.10.13, v3.11.7

The GET /_api/index endpoint may now include a progress attribute for the elements in the indexes array. For every index that is currently being created, it indicates the progress of the index generation (in percent).

To return indexes that are not yet fully built but are in the building phase, add the withHidden=true query parameter to the call of the endpoint. Note that this includes internal indexes in the response as well, such as arangosearch indexes.

curl "http://localhost:8529/_api/index?collection=myCollection&withHidden=true"
Restriction of indexable fields

It is now forbidden to create indexes that cover fields whose attribute names start or end with a colon (:), for example, fields: ["value:"]. This notation is reserved for internal use.

Existing indexes are not affected but you cannot create new indexes with a preceding or trailing colon using the POST /_api/index endpoint.

Inverted indexes

Introduced in: v3.10.2

Inverted indexes support new caching options in the Enterprise Edition.

  • A new cache option, settable for the index as a whole (boolean, default: false) or for specific fields (boolean, default: the value of the top-level cache option), to always cache field normalization values and Geo Analyzer auxiliary data in memory.

  • A new cache option per object in the definition of the storedValues elements to always cache stored values in memory (boolean, default: false).

  • A new cache option in the primarySort property to always cache the primary sort columns in memory (boolean, default: false).

  • A new primaryKeyCache property for inverted indexes to always cache the primary key column in memory (boolean, default: false).

The POST /_api/index endpoint accepts these new options for inverted indexes and the GET /_api/index and GET /_api/index/<index-id> endpoints may return these options. The attributes are omitted in responses unless you enable the respective option.
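
Creating an inverted index with some of the caching options enabled might look like this. A sketch, assuming the Enterprise Edition (the index, field, and collection names are hypothetical):

> curl -d '{"type":"inverted","name":"inv-idx","cache":true,"primaryKeyCache":true,"fields":[{"name":"title","cache":true}]}' 'http://localhost:8529/_api/index?collection=myCollection'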

View API

Views of the type arangosearch support new caching options in the Enterprise Edition.

Introduced in: v3.9.5, v3.10.2

  • A cache option for individual View links or fields (boolean, default: false).
  • A cache option in the definition of a storedValues View property (boolean, immutable, default: false).

Introduced in: v3.9.6, v3.10.2

  • A primarySortCache View property (boolean, immutable, default: false).
  • A primaryKeyCache View property (boolean, immutable, default: false).

The POST /_api/view endpoint accepts these new options for arangosearch Views, the GET /_api/view/<view-name>/properties endpoint may return these options, and you can change the cache View link/field property with the PUT /_api/view/<view-name>/properties and PATCH /_api/view/<view-name>/properties endpoints.

Introduced in: v3.10.3

You may use a shorthand notation for the storedValues option on arangosearch View creation, like ["attr1", "attr2"], instead of an array of objects.
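
For example (a sketch; the View name is hypothetical):

> curl -d '{"name":"myView","type":"arangosearch","storedValues":["attr1","attr2"]}' http://localhost:8529/_api/view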

See the arangosearch Views Reference for details.

Pregel API

Four new endpoints have been added to the Pregel HTTP interface for the new persisted execution statistics for Pregel jobs:

  • GET /_api/control_pregel/history/{id} to retrieve the persisted execution statistics of a specific Pregel job
  • GET /_api/control_pregel/history to retrieve the persisted execution statistics of all currently active and past Pregel jobs
  • DELETE /_api/control_pregel/history/{id} to delete the persisted execution statistics of a specific Pregel job
  • DELETE /_api/control_pregel/history to delete the persisted execution statistics of all Pregel jobs

See Pregel HTTP API for details.

Cluster rebalance API

The POST /_admin/cluster/rebalance and PUT /_admin/cluster/rebalance endpoints support a new excludeSystemCollections option that lets you ignore system collections in the shard rebalance plan.

The /_admin/cluster/rebalance route (GET, POST, and PUT methods) returns a new totalShardsFromSystemCollections property in the shards object of the result with the number of leader shards from system collections. The adjacent totalShards property may not include system collections depending on the excludeSystemCollections option.
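
Computing a rebalance plan that ignores system collections might look like this (a sketch; the set of accepted body attributes may vary by version):

> curl -X POST -d '{"version":1,"excludeSystemCollections":true}' http://localhost:8529/_admin/cluster/rebalance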

Explain API

Introduced in: v3.10.4

The POST /_api/explain endpoint for explaining AQL queries now includes the following two new statistics in the stats attribute of the response:

  • peakMemoryUsage (number): The maximum memory usage of the query during explain (in bytes).
  • executionTime (number): The (wall-clock) time in seconds needed to explain the query.

Metrics API

The following metric has been added in version 3.11:

  • arangodb_search_num_primary_docs: Number of primary documents for the current snapshot.

Introduced in: v3.10.7, v3.11.1

This new metric reports the number of RocksDB .sst files:

  • rocksdb_total_sst_files: Total number of RocksDB .sst files, aggregated over all levels.

Introduced in: v3.10.7

The metrics endpoints include the following new file descriptors metrics:

  • arangodb_file_descriptors_current
  • arangodb_file_descriptors_limit

Introduced in: v3.8.9, v3.9.6, v3.10.2

The metrics endpoints include the following new traffic accounting metrics:

  • arangodb_client_user_connection_statistics_bytes_received
  • arangodb_client_user_connection_statistics_bytes_sent
  • arangodb_http1_connections_total

Introduced in: v3.9.6, v3.10.2

The metrics endpoints include the following new edge cache (re-)filling metrics:

  • rocksdb_cache_auto_refill_loaded_total
  • rocksdb_cache_auto_refill_dropped_total
  • rocksdb_cache_full_index_refills_total

Introduced in: v3.9.10, v3.10.5

The following metrics for write-ahead log (WAL) file tracking have been added:

  • rocksdb_live_wal_files: Number of live RocksDB WAL files.
  • rocksdb_wal_released_tick_flush: Lower bound sequence number from which WAL files need to be kept because of external flushing needs.
  • rocksdb_wal_released_tick_replication: Lower bound sequence number from which WAL files need to be kept because of replication.
  • arangodb_flush_subscriptions: Number of currently active flush subscriptions.

Introduced in: v3.10.5

The following metric for the number of replication clients for a server has been added:

  • arangodb_replication_clients: Number of currently connected/active replication clients.

Introduced in: v3.9.11, v3.10.6

The following metrics for diagnosing delays in cluster-internal network requests have been added:

  • arangodb_network_dequeue_duration: Internal request duration for the dequeue in seconds.
  • arangodb_network_response_duration: Internal request duration from fully sent till response received in seconds.
  • arangodb_network_send_duration: Internal request send duration in seconds.
  • arangodb_network_unfinished_sends_total: Number of internal requests for which sending has not finished.

Introduced in: v3.10.7

The following metric stores the peak value of the rocksdb_cache_allocated metric:

  • rocksdb_cache_peak_allocated: Global peak memory allocation of ArangoDB in-memory caches.

Introduced in: v3.11.2

The following metrics have been added about the LZ4 compression for values in the in-memory edge cache:

  • rocksdb_cache_edge_inserts_effective_entries_size_total
  • rocksdb_cache_edge_inserts_uncompressed_entries_size_total
  • rocksdb_cache_edge_compression_ratio

Introduced in: v3.10.11, v3.11.4

The following metrics have been added to improve the observability of the in-memory cache subsystem:

  • rocksdb_cache_free_memory_tasks_total
  • rocksdb_cache_free_memory_tasks_duration_total
  • rocksdb_cache_migrate_tasks_total
  • rocksdb_cache_migrate_tasks_duration_total

Introduced in: v3.11.4

The following metrics have been added to improve the observability of the in-memory edge cache:

  • rocksdb_cache_edge_compressed_inserts_total
  • rocksdb_cache_edge_empty_inserts_total
  • rocksdb_cache_edge_inserts_total

Introduced in: v3.11.5

The following metrics have been added to monitor and detect temporary or permanent connectivity issues as well as how many scheduler threads are in the detached state:

  • arangodb_network_connectivity_failures_coordinators
  • arangodb_network_connectivity_failures_dbservers_total
  • arangodb_scheduler_num_detached_threads

Log level API

Introduced in: v3.10.2

The GET /_admin/log/level and PUT /_admin/log/level endpoints support a new query parameter serverId, to forward log level get and set requests to a specific server. This makes it easier to adjust the log levels in clusters because DB-Servers require JWT authentication whereas Coordinators also support authentication using usernames and passwords.

Optimizer rule descriptions

Introduced in: v3.10.9, v3.11.2

The GET /_api/query/rules endpoint now includes a description attribute for every optimizer rule that briefly explains what it does.

Endpoints deprecated

The GET /_admin/database/target-version endpoint is deprecated in favor of the more general version API with the endpoint GET /_api/version. The endpoint will be removed in ArangoDB v3.12.

JavaScript API

Database creation

The db._createDatabase() method for creating a new database has changed. If the specified database name is invalid/illegal, it now returns the error code 1208 (ERROR_ARANGO_ILLEGAL_NAME). It previously returned 1229 (ERROR_ARANGO_DATABASE_NAME_INVALID) in this case.

This is a downwards-incompatible change, but unifies the behavior for database creation with the behavior of collection and View creation, which also return the error code 1208 in case the specified name is not allowed.

Index methods

Calling collection.dropIndex(...) or db._dropIndex(...) now raises an error if the specified index does not exist or cannot be dropped (for example, because it is a primary index or edge index). The methods previously returned false. In case of success, they still return true.

You can wrap calls to these methods with a try { ... } block to catch errors, for example, in arangosh or in Foxx services.

AQL queries

When you use, for example, the db._query() method to execute an AQL query against an arangosearch View while it is still in the process of being built, the query now includes a warning message that the results may not be complete due to the ongoing indexing process of the View.

The error code associated with this warning is 1240 (ERROR_ARANGO_INCOMPLETE_READ).


Introduced in: v3.9.11, v3.10.7

If you specify collections that don’t exist in the options of AQL graph traversals (vertexCollections, edgeCollections), queries now fail. In previous versions, unknown vertex collections were ignored, and the behavior for unknown edge collections was undefined.

Additionally, queries fail if you specify a document collection or View in edgeCollections.

Pregel module

Two new methods have been added to the @arangodb/pregel module:

  • history(...) to get the persisted execution statistics of a specific or all algorithm executions
  • removeHistory(...) to delete the persisted execution statistics of a specific or all algorithm executions

var pregel = require("@arangodb/pregel");
const execution = pregel.start("sssp", "demograph", { source: "vertices/V" });
const historyStatus = pregel.history(execution);
pregel.removeHistory();

See Distributed Iterative Graph Processing (Pregel) for details.

collection.iterate() deprecated

The collection.iterate() method is deprecated from v3.11.0 onwards and will be removed in a future version.

@arangodb/request certificate validation

Introduced in: v3.11.11

The @arangodb/request module now supports two additional options for making HTTPS requests:

  • verifyCertificates (optional): if set to true, the server certificate of the remote server is verified using the default certificate store of the system. Default: false.
  • verifyDepth (optional): limits the maximum length of the certificate chain that counts as valid. Default: 10.