Single instance vs. Cluster deployments
Similarities and differences in behavior between single servers and clusters
In general, a single server configuration and a cluster configuration of ArangoDB behave very similarly. However, the different nature of these setups leads to some discrepancies in behavior between the two configurations. A summary of the potential differences follows.
Migrating from a Single Instance to a Cluster
To migrate from a Single Instance to a Cluster, you need to take a backup from the Single Instance with the `arangodump` tool and restore it into the Cluster with `arangorestore`.
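A minimal sketch of such a migration; the endpoints and the dump directory are placeholders that you need to adapt to your own deployment:

```sh
# Dump the current database from the single instance.
arangodump --server.endpoint tcp://127.0.0.1:8529 --output-directory dump

# Restore the dump via a Coordinator of the cluster.
arangorestore --server.endpoint tcp://127.0.0.1:8530 --input-directory dump
```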
If you have developed your application using a Single Instance and would now like to use a Cluster, please test your application with the Cluster before upgrading your production system.
If both your Single Instance and Cluster are running on the same machine, they should have distinct data directories. It is not possible to start a Cluster on the data directory of a Single Instance.
Locking and dead-lock prevention
In a single server configuration, all data is local and dead-locks can easily be detected. In a cluster configuration, data is distributed across many servers and some conflicts cannot be detected easily. Therefore, certain operations (like locking shards) are performed sequentially and in a strictly predefined order, so that dead-locks are avoided by design.
Document Keys
In a cluster the autoincrement key generator is supported for single-sharded collections. It cannot be used for collections with more than one shard.
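For example, a sketch in arangosh (the collection names are illustrative):

```js
// Works in a cluster: the collection has exactly one shard.
db._create("logs", {
  numberOfShards: 1,
  keyOptions: { type: "autoincrement", increment: 1, offset: 0 }
});

// Fails in a cluster: autoincrement cannot be combined with multiple shards.
// db._create("events", { numberOfShards: 3, keyOptions: { type: "autoincrement" } });
```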
Indexes
Unique constraints
There are restrictions on the allowed unique constraints in a cluster. Any unique constraint that cannot be checked locally on a per-shard basis is not allowed in a cluster setup. More concretely, unique constraints in a cluster are only allowed in the following situations:
- there is always a unique constraint on the primary key `_key`; if the collection is not sharded by `_key`, then `_key` must be automatically generated by the database and cannot be prescribed by the client
- the collection has only one shard, in which case the same unique constraints are allowed as in the single instance case
- if the collection is sharded by exactly one attribute other than `_key`, then there can be a unique constraint on that attribute
These restrictions are imposed because otherwise checking for a unique constraint violation would involve checking with all shards, which would have a considerable performance impact.
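A sketch of the allowed case in arangosh, assuming an illustrative `users` collection sharded by an `email` attribute:

```js
// Allowed: the unique index is on exactly the shard key attribute,
// so uniqueness can be checked locally on each shard.
db._create("users", { numberOfShards: 3, shardKeys: ["email"] });
db.users.ensureIndex({ type: "persistent", fields: ["email"], unique: true });

// Not allowed: a unique index on an attribute other than the shard key
// could not be checked on a per-shard basis.
// db.users.ensureIndex({ type: "persistent", fields: ["name"], unique: true });
```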
Renaming
It is not possible to rename collections or views in a cluster.
AQL
The AQL syntax for single server and cluster is identical. However, there is one additional requirement (regarding `WITH`) and there are possible performance differences.
WITH
The `WITH` keyword in AQL must be used to declare which collections are used in the query. For most AQL queries, the required collections can be deduced from the query itself. However, with traversals this is not possible if edge collections are used directly. See the AQL WITH operation for details. The `WITH` statement is not necessary when using named graphs for the traversals.
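For example, a traversal that uses an edge collection directly has to declare the vertex collections up front (a sketch with illustrative collection names):

```js
// arangosh: "persons" must be declared with WITH, because the traversal
// only names the edge collection "knows" directly.
db._query(`
  WITH persons
  FOR v IN 1..2 OUTBOUND "persons/alice" knows
    RETURN v
`);
```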
There is a `--query.require-with` startup option to make single server installations also require the `WITH` statements in the same places where a cluster installation would. This option is false by default, but can be set to true to remove this behavior difference between single servers and clusters.
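For example, to start a single server with this check enabled:

```sh
# Make the single server reject queries that a cluster would reject
# for a missing WITH declaration.
arangod --query.require-with true
```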
Performance
Performance of AQL queries can vary between single server and cluster. If a query can be distributed to many DB-Servers and executed in parallel, then cluster performance can be better; for example, if you do a distributed `COLLECT` aggregation or a distributed `FILTER` operation.
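A sketch of such a query (the `orders` collection and its attributes are illustrative); the `FILTER` and the aggregation can run on all shards in parallel:

```js
db._query(`
  FOR o IN orders
    FILTER o.status == "shipped"
    COLLECT country = o.country
    AGGREGATE total = SUM(o.amount)
    RETURN { country, total }
`);
```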
On the other hand, if you do a join or a traversal and the data is not local to one server, then the performance can be worse compared to a single server. This is especially true for traversals if the data is not sharded with care. Our SmartGraphs feature helps with this for traversals.
Single document operations can have a higher throughput in a cluster but also have a higher latency, due to an additional network hop from Coordinator to DB-Server.
Any operation that needs to find documents by anything other than the shard key has to fan out to all shards, so it is a lot slower than referring to the documents via the shard key. Optimized lookups by shard key are only possible for equality lookups, not for range lookups.
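As an illustration, assuming a `users` collection sharded by a `country` attribute:

```js
// Equality lookup on the shard key: the Coordinator can route the
// request to the single responsible shard.
db._query(`FOR u IN users FILTER u.country == "DE" RETURN u`);

// Lookup on any other attribute: the query has to fan out to all shards.
db._query(`FOR u IN users FILTER u.name == "Alice" RETURN u`);
```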
Memory usage
Some query results must be built up in memory on a Coordinator, for example if a dataset needs to be sorted on the fly. This can relatively easily overwhelm a Coordinator if the dataset is sharded across multiple DB-Servers. Use indexes and streaming cursors to circumvent this problem.
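For instance, an index on the sort attribute can take a sort off the Coordinator (a sketch; the collection and attribute names are illustrative):

```js
// With a persistent index on "timestamp", the SORT can use the index
// order instead of buffering and sorting the whole result set in
// Coordinator memory.
db.events.ensureIndex({ type: "persistent", fields: ["timestamp"] });
db._query(`FOR e IN events SORT e.timestamp DESC LIMIT 100 RETURN e`);
```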
Transactions
Using a single instance of ArangoDB, multi-document / multi-collection queries are guaranteed to be fully ACID. This is more than many other NoSQL database systems support. In cluster mode, single-document operations are also fully ACID. Multi-document / multi-collection queries in a cluster are not ACID, which is equally the case for competing database systems. See Transactions for details.
Batch operations for multiple documents in the same collection are only fully transactional in a single instance.
SmartGraphs
In SmartGraphs there are restrictions on the values of the `_key` attribute. Essentially, the `_key` attribute values for vertices must be prefixed with the string value of the SmartGraph attribute and a colon. A similar restriction applies for the edges.
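For example, assuming `country` is the SmartGraph attribute (the collection name and values are illustrative):

```js
// The vertex key is prefixed with the SmartGraph attribute value and a colon.
db.customers.insert({ _key: "DE:12345", country: "DE" });
```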
Foxx
Foxx apps run on the Coordinators of a cluster. Since Coordinators are stateless, one must not use regular file accesses in Foxx apps in a cluster.
Agency
A cluster deployment needs a central, RAFT-based key/value store called “the Agency” to keep the current cluster configuration and manage failover. Being RAFT-based, this is a real-time system. If the servers running the Agency instances (typically three or five) receive too much load, the RAFT protocol stops working and the stability of the whole cluster is endangered. If you foresee this problem, run the Agency instances on separate nodes. None of this is necessary in a single server deployment.
Dump/Restore
In a cluster, the `arangodump` utility cannot guarantee a consistent snapshot across multiple shards or even multiple collections. In a single server, `arangodump` produces a consistent snapshot.
In the Enterprise Edition, there is an additional utility `arangobackup` and an HTTP API for Hot Backups to create consistent cluster snapshots.
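A minimal sketch of creating and listing such a hot backup with `arangobackup` (the endpoint is a placeholder):

```sh
# Create a consistent hot backup of the whole cluster, then list backups.
arangobackup create --server.endpoint tcp://127.0.0.1:8529
arangobackup list --server.endpoint tcp://127.0.0.1:8529
```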