ArangoDB v3.12 is under development and not released yet. This documentation is not final and potentially incomplete.
Collections allow you to store documents, and you can use them to group records of similar kinds together.
A collection can contain a set of documents, similar to how a folder contains files. You can store documents with varying data structures in a single collection, but a collection is typically used to only store one type of entities. For example, you can use one collection for products, another for customers, and yet another for orders.
The regular type of collection is a document collection. If you use document collections for a graph, then they are referred to as vertex collections.
To store connections between the vertices of a graph, you need to use edge collections. The documents they contain have a `_from` and a `_to` attribute to reference documents by their ID.
Collections that are used internally by ArangoDB are prefixed with an underscore (like `_users`) and are called system collections. They can be document collections as well as edge collections.
You need to specify whether you want a document collection or an edge collection when you create the collection. The type cannot be changed later.
You can give each collection a name to identify and access it. The name needs to be unique within a database, but not globally for the entire ArangoDB instance.
The namespace for collections is shared with Views. There cannot exist a collection and a View with the same name in the same database.
The collection name needs to be a string that adheres to either the traditional or the extended naming constraints. Whether the former or the latter are active depends on the `--database.extended-names` startup option. The extended naming constraints are used if enabled, allowing many special and UTF-8 characters in database, collection, View, and index names. If set to `false` (default), the traditional naming constraints are enforced.
The restrictions for collection names are as follows:
For the traditional naming constraints:
- The names must only consist of the letters `A` to `Z` (both in lower and upper case), the digits `0` to `9`, and the underscore (`_`) and dash (`-`) characters. This also means that any non-ASCII names are not allowed.
- Names of user-defined collections must always start with a letter. System collection names must start with an underscore. You should not use system collection names for your own collections.
- The maximum allowed length of a name is 256 bytes.
- Collection names are case-sensitive.
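The traditional rules above can be sketched as a small validator. This is an illustrative helper, not part of ArangoDB's API; the regular expressions simply encode the constraints listed above.

```javascript
// Sketch of the traditional naming constraints (illustrative, not ArangoDB API).
// Traditional names are ASCII-only, so string length equals byte length.
function isValidTraditionalName(name, isSystem = false) {
  if (name.length === 0 || name.length > 256) return false;
  // System collection names start with an underscore;
  // user-defined collection names must start with a letter.
  const pattern = isSystem
    ? /^_[a-zA-Z0-9_-]*$/
    : /^[a-zA-Z][a-zA-Z0-9_-]*$/;
  return pattern.test(name);
}
```

For example, `isValidTraditionalName("products")` passes, while a name starting with a digit or containing non-ASCII letters is rejected.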
For the extended naming constraints:
- Names can consist of most UTF-8 characters, such as Japanese or Arabic letters, emojis, and accented letters. Some ASCII characters are disallowed, but fewer than with the traditional naming constraints.
- Names cannot contain the characters `/` or `:` at any position, nor any control characters (below ASCII code 32), such as `\n` and `\t`.
- Spaces are accepted, but only in between characters of the name. Leading or trailing spaces are not allowed.
- The underscore (`_`) and the numeric digits `0` to `9` are not allowed as the first character, but are allowed at later positions.
- Collection names are case-sensitive.
- Collection names containing UTF-8 characters must be NFC-normalized. Non-normalized names are rejected by the server.
- The maximum length for a collection name is 256 bytes after normalization. As a UTF-8 character may consist of multiple bytes, this does not necessarily equate to 256 characters.
Example collection names that can be used with the extended naming constraints:
`abc? <> 123!`
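Two of the extended-name rules are easy to check client-side: NFC normalization and the 256-byte limit after normalization. The following sketch uses standard JavaScript string normalization and Node's byte-length helper; the function name is illustrative, not an ArangoDB API.

```javascript
// Sketch of two extended-name checks (illustrative helper, not ArangoDB API):
// the server rejects non-NFC names, and the length limit is 256 *bytes*
// measured after NFC normalization, not 256 characters.
function checkExtendedName(name) {
  const normalized = name.normalize("NFC");
  const byteLength = Buffer.byteLength(normalized, "utf8");
  return {
    isNfc: normalized === name,       // name must already be NFC-normalized
    byteLength,                       // UTF-8 characters can be multi-byte
    withinLimit: byteLength <= 256,
  };
}
```

For example, the three-character name `日本語` occupies nine bytes in UTF-8, and a decomposed spelling like `cafe\u0301` fails the NFC check even though it renders the same as `café`.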
While it is possible to change the value of the `--database.extended-names` option from `false` to `true` to enable extended names, the reverse is not possible. Once the extended names have been enabled, they remain permanently enabled so that existing databases, collections, Views, and indexes with extended names remain accessible.
Please be aware that dumps containing extended names cannot be restored into older versions that only support the traditional naming constraints. In a cluster setup, it is required to use the same naming constraints for all Coordinators and DB-Servers of the cluster. Otherwise, the startup is refused. In DC2DC setups, it is also required to use the same naming constraints for both datacenters to avoid incompatibilities.
You can rename collections (except in cluster deployments). This changes the collection name, but not the collection identifier.
A collection identifier lets you refer to a collection in a database, similar to the name. It is a string value and is unique within a database. Unlike collection names, ArangoDB assigns collection IDs automatically and you have no control over them.
ArangoDB internally uses collection IDs to look up collections. However, you should use the collection name to access a collection instead of its identifier.
ArangoDB uses 64-bit unsigned integer values to maintain collection IDs internally. When returning collection IDs to clients, ArangoDB returns them as strings to ensure the identifiers are not clipped or rounded by clients that do not support big integers. Clients should treat the collection IDs returned by ArangoDB as opaque strings when they store or use them locally.
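The reason for returning IDs as strings is easy to demonstrate: JavaScript numbers are IEEE-754 doubles and can only represent integers exactly up to 2^53 − 1, while collection IDs are 64-bit. The ID below is a hypothetical value chosen to show the precision loss.

```javascript
// Why collection IDs are returned as strings: a 64-bit unsigned integer can
// exceed JavaScript's safe integer range (2^53 - 1), so parsing it as a
// Number silently loses precision.
const idFromServer = "9007199254740993"; // hypothetical ID, equal to 2^53 + 1

const asNumber = Number(idFromServer);
// The parsed value is no longer a safe integer and no longer round-trips:
const roundTripped = String(asNumber);
const lossless = roundTripped === idFromServer;
```

Here `lossless` ends up `false`, which is exactly why clients should keep the ID as an opaque string.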
ArangoDB allows using key generators for each collection. Key generators have the purpose of auto-generating values for the `_key` attribute of a document if none was specified by the user. By default, ArangoDB uses the traditional key generator. The traditional key generator auto-generates key values that are strings with ever-increasing numbers. The increment values it uses are non-deterministic.
In contrast, the auto-increment key generator auto-generates deterministic key values. Both the start value and the increment value can be defined when the collection is created. The default start value is `0` and the default increment is `1`, meaning the key values it creates by default are:
1, 2, 3, 4, 5, …
When creating a collection with the auto-increment key generator and an increment of `5`, the generated keys would be:
1, 6, 11, 16, 21, …
The auto-increment values are increased and handed out on each document insert attempt. Even if an insert fails, the auto-increment value is never rolled back. That means there may be gaps in the sequence of assigned auto-increment values if inserts fail.
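The behavior above can be modeled in a few lines. This is a sketch that reproduces the documented sequences, not ArangoDB's actual implementation; the class name is made up for illustration.

```javascript
// Sketch of the documented auto-increment behavior (not ArangoDB's actual
// implementation). Reproduces the sequences shown above:
//   offset 0, increment 1 -> 1, 2, 3, ...
//   offset 0, increment 5 -> 1, 6, 11, ...
class AutoIncrementSketch {
  constructor(offset = 0, increment = 1) {
    this.last = offset;        // counter state before any insert
    this.increment = increment;
    this.first = true;
  }
  // Called on every insert *attempt*. The counter only moves forward and is
  // never rolled back, so a failed insert leaves a gap in the key sequence.
  nextKey() {
    this.last = this.first ? this.last + 1 : this.last + this.increment;
    this.first = false;
    return String(this.last);
  }
}
```

With an increment of `5`, three insert attempts hand out the keys `"1"`, `"6"`, and `"11"`; if the second insert fails, the key `"6"` is simply skipped.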
The `db._create(collection-name)` call creates a new collection called *collection-name*.

The `db._collection(collection-name)` call returns the specified collection. For example, assume that the collection identifier is `7254820` and the name is `demo`; then the collection can be accessed via `db._collection("demo")`. If no collection with such a name exists, `null` is returned.

There is a short-cut that you can use:

`db.collection-name // or db["collection-name"]`

This property access returns the collection named *collection-name*.
Distributed ArangoDB setups offer synchronous replication,
which means that there is the option to replicate all data
automatically within an ArangoDB cluster. This is configured for sharded
collections on a per-collection basis by specifying a replication factor.
A replication factor of `k` means that altogether `k` copies of each shard are kept in the cluster on `k` different servers, and are kept in sync. That is, every write operation is automatically replicated to all copies.
This is organized using a leader/follower model. At all times, one of the servers holding replicas for a shard is “the leader” and all others are “followers”, this configuration is held in the Agency (see Cluster for details of the ArangoDB cluster architecture). Every write operation is sent to the leader by one of the Coordinators, and then replicated to all followers before the operation is reported to have succeeded. The leader keeps a record of which followers are currently in sync. In case of network problems or a failure of a follower, a leader can and will drop a follower temporarily after 3 seconds, such that service can resume. In due course, the follower will automatically resynchronize with the leader to restore resilience.
If a leader fails, the cluster Agency automatically initiates a failover routine after around 15 seconds, promoting one of the followers to leader. The other followers (and the former leader, when it comes back), automatically resynchronize with the new leader to restore resilience. Usually, this whole failover procedure can be handled transparently for the Coordinator, such that the user code does not even see an error message.
This fault tolerance comes at the cost of increased latency. Each write operation needs an additional network round trip for the synchronous replication of the followers (but all replication operations to all followers happen concurrently). Therefore, the default replication factor is `1`, which means no replication.
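The latency cost can be illustrated with a toy model. All numbers and the function below are illustrative assumptions, not measurements: a write is acknowledged once the leader has applied it and every follower has confirmed, and because followers are contacted concurrently, the replication overhead is determined by the slowest follower rather than the sum over all followers.

```javascript
// Toy latency model for synchronous replication (illustrative, not an
// ArangoDB API). Followers replicate concurrently, so the added cost is the
// slowest follower's round trip, not the sum of all follower round trips.
function writeLatencyMs(leaderMs, followerRoundTripsMs) {
  const replication = followerRoundTripsMs.length
    ? Math.max(...followerRoundTripsMs)   // wait for the slowest follower
    : 0;                                  // replication factor 1: no followers
  return leaderMs + replication;
}
```

For example, with a replication factor of `3` (one leader plus two followers with round trips of 4 ms and 7 ms), a 2 ms local write costs about 9 ms in this model, rather than the 13 ms a serial replication scheme would need.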
For details on how to switch on synchronous replication for a collection, see the cluster documentation.