ArangoDB v3.10 reached End of Life (EOL) and is no longer supported.
This documentation is outdated. Please see the most recent stable version.
Replication dump commands
Inventory
The inventory method can be used to query an ArangoDB database's current set of collections plus their indexes. Clients can use this method to get an overview of which collections are present in the database. They can use this information to start either a full or a partial synchronization of data, e.g. to initiate a backup or an incremental data synchronization.
Get a replication inventory
Returns the array of collections and their indexes, and the array of Views available. These arrays can be used by replication clients to initiate an initial synchronization with the server.

The response contains all collections, their indexes, and Views in the requested database if `global` is not set, and all collections, indexes, and Views in all databases if `global` is set.

If `global` is not set, it is possible to restrict the response to a single collection by setting the `collection` parameter. In this case, the response contains only information about the requested collection in the `collections` array, and no information about Views (i.e. the `views` response attribute is an empty array).

The response is a JSON object with the `collections`, `views`, `state`, and `tick` attributes.
- `collections`: an array of collections with the following sub-attributes:
  - `parameters`: the collection properties
  - `indexes`: an array of the indexes of the collection. Primary indexes and edge indexes are not included in this array.
- `state`: the current state of the replication logger, with the following sub-attributes:
  - `running`: whether or not the replication logger is currently active. Note: since ArangoDB 2.2, the value is always `true`
  - `lastLogTick`: the value of the last tick the replication logger has written
  - `time`: the current time on the server
- `views`: an array of available Views.
Replication clients should note the `lastLogTick` value returned. They can then fetch collections' data using the dump method up to the value of `lastLogTick`, and query the continuous replication log for log events after this tick value.
To create a full copy of the collections on the server, a replication client can execute these steps:

1. Call the `/inventory` API method. This returns the `lastLogTick` value and the array of collections and indexes from the server.
2. For each collection returned by `/inventory`, create the collection locally and call `/dump` to stream the collection data to the client, up to the value of `lastLogTick`.
3. After that, the client can create the indexes on the collections as they were reported by `/inventory`.
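The steps above can be sketched in code. The following Python outline is illustrative only: the four callables stand in for the HTTP calls to the `/inventory` and `/dump` APIs and for local collection/index creation, and their names and signatures are assumptions, not part of the ArangoDB API.

```python
def full_copy(get_inventory, dump_collection, create_collection, create_index):
    """Sketch of the full-copy procedure described above."""
    # Step 1: fetch the inventory; note the lastLogTick for later tailing.
    inventory = get_inventory()
    last_log_tick = inventory["state"]["lastLogTick"]
    for coll in inventory["collections"]:
        name = coll["parameters"]["name"]
        # Step 2: create each collection locally, then stream its data
        # (up to lastLogTick) via the dump API and apply it.
        create_collection(coll["parameters"])
        dump_collection(name, last_log_tick)
        # Step 3: create the indexes only after the data has been loaded.
        for index in coll["indexes"]:
            create_index(name, index)
    return last_log_tick
```

Creating indexes last mirrors the order given above and avoids paying index-maintenance costs during the initial data load.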
If the client wants to continuously stream replication log events from the logger server, the following additional steps need to be carried out:

1. The client should call `/_api/wal/tail` initially to fetch the first batch of replication events that were logged after the client's call to `/inventory`. The call to `/_api/wal/tail` should use a `from` parameter with the value of the `lastLogTick` as reported by `/inventory`. The call to `/_api/wal/tail` returns the `x-arango-replication-lastincluded` header, which contains the last tick value included in the response.
2. The client can then continuously call `/_api/wal/tail` to incrementally fetch new replication events that occurred after the last transfer. Calls should use a `from` parameter with the value of the `x-arango-replication-lastincluded` header of the previous response. If there are no more replication events, the response is empty, and the client can sleep for a while and try again later.
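A minimal sketch of this polling loop, assuming a `fetch` callable that wraps the `GET /_api/wal/tail?from=...` call and returns the parsed events together with the `x-arango-replication-lastincluded` header value (the callable and its signature are illustrative, not part of the API):

```python
import time

def tail_wal(fetch, from_tick, poll_seconds=1.0, max_iterations=None):
    """Continuously tail the replication log as described above.

    fetch(from_tick) must return (events, last_included_tick); an empty
    event list means there is currently nothing new to fetch.
    """
    tick = from_tick
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        events, last_included = fetch(tick)
        if events:
            yield from events
            # The next call resumes from the last tick the server included.
            tick = last_included
        else:
            # No new events: back off and retry later, as suggested above.
            time.sleep(poll_seconds)
        iterations += 1
```

`max_iterations` exists only so the loop can be bounded in tests; a real client would run it indefinitely.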
On a Coordinator, the request requires a `DBserver` query parameter, which must be the ID of a DB-Server. The very same request is forwarded synchronously to that DB-Server. It is an error if this parameter is not bound in the Coordinator case.

If the `global` parameter is set, the top-level object contains a key `databases`, under which each key represents a database name, and the value conforms to the above description.

Batch
The batch method creates a snapshot of the current state that can then be dumped.
Create a new dump batch
Creates a new dump batch and returns the batch’s id.
The response is a JSON object with the following attributes:
- `id`: the id of the batch
- `lastTick`: snapshot tick value used when creating the batch
- `state`: additional leader state information (only present if the `state` URL parameter was set to `true` in the request)
On a Coordinator, the request requires a `DBserver` query parameter, which must be the ID of a DB-Server. The very same request is forwarded synchronously to that DB-Server. It is an error if this parameter is not bound in the Coordinator case.

Delete an existing dump batch
Deletes the existing dump batch, allowing compaction and cleanup to resume.
On a Coordinator, the request requires a `DBserver` query parameter, which must be the ID of a DB-Server. The very same request is forwarded synchronously to that DB-Server. It is an error if this parameter is not bound in the Coordinator case.

Extend the TTL of a dump batch
Extends the time-to-live (TTL) of an existing dump batch, using the batch’s ID and the provided TTL value.
If the batch’s TTL can be extended successfully, the response is empty.
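The batch lifecycle (create, extend, delete) can be summarized by the requests a client would issue. The helpers below only build `(method, path, body)` tuples; the endpoint paths assume the usual `/_api/replication/batch` routes and should be verified against the server version in use.

```python
import json

def create_batch_request(ttl):
    # POST creates the batch; the response carries its `id` and `lastTick`.
    return ("POST", "/_api/replication/batch", json.dumps({"ttl": ttl}))

def extend_batch_request(batch_id, ttl):
    # PUT extends the TTL of an existing batch; an empty response means success.
    return ("PUT", f"/_api/replication/batch/{batch_id}", json.dumps({"ttl": ttl}))

def delete_batch_request(batch_id):
    # DELETE releases the batch so compaction and cleanup can resume.
    return ("DELETE", f"/_api/replication/batch/{batch_id}", None)
```

A client would typically create a batch before dumping, extend it periodically while a long dump is in progress, and delete it as soon as the dump is finished.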
On a Coordinator, the request requires a `DBserver` query parameter, which must be the ID of a DB-Server. The very same request is forwarded synchronously to that DB-Server. It is an error if this parameter is not bound in the Coordinator case.

Dump
The dump method can be used to fetch data from a specific collection. As the results of the dump command can be huge, dump may not return all data from a collection at once. Instead, the dump command may be called repeatedly by replication clients until there is no more data to fetch. The dump command will not only return the current documents in the collection, but also document updates and deletions.
Note that the dump method will only return documents, updates, and deletions from a collection’s journals and datafiles. Operations that are stored in the write-ahead log only will not be returned. In order to ensure that these operations are included in a dump, the write-ahead log must be flushed first.
To get to an identical state of data, replication clients should apply the individual parts of the dump results in the same order as they are provided.
Get a replication dump
Returns the data from a collection for the requested range.
The `chunkSize` query parameter can be used to control the size of the result. It must be specified in bytes. The `chunkSize` value is only honored approximately: otherwise, a too-low `chunkSize` value could prevent the server from putting even a single entry into the result. Therefore, the `chunkSize` value is only consulted after an entry has been written into the result. If the result size is then greater than `chunkSize`, the server responds with as many entries as are already in the response. If the result size is still less than `chunkSize`, the server tries to return more data if there is more data left to return.

If `chunkSize` is not specified, a server-side default value is used.
The `Content-Type` of the result is `application/x-arango-dump`. This is an easy-to-process format, with all entries going onto separate lines in the response body.
Each line itself is a JSON object, with at least the following attributes:

- `tick`: the operation's tick attribute
- `key`: the key of the document/edge or the key used in the deletion operation
- `rev`: the revision id of the document/edge or the deletion operation
- `data`: the actual document/edge data for types 2300 and 2301. The full document/edge data will be returned even for updates.
- `type`: the type of entry. Possible values for `type` are:
  - `2300`: document insertion/update
  - `2301`: edge insertion/update
  - `2302`: document/edge deletion
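Putting the format together, a client could apply a dump response body to a local store like this. This is a sketch: the `store` dict stands in for the client's local collection, and the attribute names follow the list above.

```python
import json

# Entry type codes from the dump format described above.
DOC_INSERT_UPDATE = 2300
EDGE_INSERT_UPDATE = 2301
DELETION = 2302

def apply_dump(body, store):
    """Apply an application/x-arango-dump body to `store`, processing
    entries strictly in the order they appear (as required above)."""
    for line in body.splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)  # each line is a standalone JSON object
        if entry["type"] in (DOC_INSERT_UPDATE, EDGE_INSERT_UPDATE):
            # Updates carry the full document, so a plain overwrite suffices.
            store[entry["key"]] = entry["data"]
        elif entry["type"] == DELETION:
            store.pop(entry["key"], None)
    return store
```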
Get the replication revision tree
Returns the Merkle tree associated with the specified collection.
The result will be JSON/VelocyPack in the following format:
{
version: <Number>,
branchingFactor: <Number>,
maxDepth: <Number>,
rangeMin: <String, revision>,
rangeMax: <String, revision>,
nodes: [
{ count: <Number>, hash: <String, revision> },
{ count: <Number>, hash: <String, revision> },
...
{ count: <Number>, hash: <String, revision> }
]
}
At the moment, there is only one version, 1, so this can safely be ignored for now.
Each <String, revision>
value type is a 64-bit value encoded as a string of
11 characters, using the same encoding as our document _rev
values. The
reason for this is that 64-bit values cannot necessarily be represented in full
in JavaScript, as it handles all numbers as floating point, and can only
represent up to 2^53-1
faithfully.
The node count should correspond to a full tree with the given `maxDepth` and `branchingFactor`. The nodes are laid out in level-order tree traversal, so the root is at index `0`, its children at indices `[1, branchingFactor]`, and so on.
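Under the assumption that `maxDepth` counts the levels below the root (the format description does not state this explicitly), the level-order layout implies the following index arithmetic:

```python
def node_count(branching_factor, max_depth):
    # Total nodes in a full tree: 1 + b + b^2 + ... + b^maxDepth.
    # Assumes maxDepth counts levels below the root (an interpretation).
    return sum(branching_factor ** d for d in range(max_depth + 1))

def children(index, branching_factor):
    # In a level-order layout, the children of node i occupy a contiguous
    # range starting at i * b + 1. For the root (index 0) this yields
    # [1, ..., branchingFactor], matching the description above.
    first = index * branching_factor + 1
    return list(range(first, first + branching_factor))
```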
Rebuild the replication revision tree
Rebuilds the Merkle tree for a collection.
If successful, there will be no return body.
List document revision IDs within requested ranges
Returns the revision IDs of documents within the requested ranges.
The body of the request should be JSON/VelocyPack and should consist of an array of pairs of string-encoded revision IDs:
[
[<String, revision>, <String, revision>],
[<String, revision>, <String, revision>],
...
[<String, revision>, <String, revision>]
]
In particular, the pairs should be non-overlapping, and sorted in ascending order of their decoded values.
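A client can check these requirements before sending the request. The sketch below compares decoded values through a pluggable `decode` function, since the actual 11-character revision encoding is not reproduced here; plain integers are used in its place for illustration.

```python
def validate_ranges(pairs, decode=int):
    """Return True if the revision-range pairs are non-overlapping and
    sorted in ascending order of their decoded values, as required above."""
    previous_end = None
    for low, high in pairs:
        lo, hi = decode(low), decode(high)
        if lo > hi:
            return False  # the pair itself is inverted
        if previous_end is not None and lo <= previous_end:
            return False  # overlaps or is out of order with the previous pair
        previous_end = hi
    return True
```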
The result will be JSON/VelocyPack in the following format:
{
ranges: [
[<String, revision>, <String, revision>, ... <String, revision>],
[<String, revision>, <String, revision>, ... <String, revision>],
...,
[<String, revision>, <String, revision>, ... <String, revision>]
],
resume: <String, revision>
}
The `resume` field is optional. If present, the response is to be considered partial, valid only through the specified revision. A subsequent request should be made with the same request body, but with the `resume` URL parameter set to that value. The subsequent response will pick up from the appropriate request pair, omitting any complete ranges or revisions that are less than the requested resume revision. As an example (ignoring the string-encoding for a moment), if the ranges `[1, 3], [5, 9], [12, 15]` are requested, then a first response may return `[], [5, 6]` with a resume point of `7`, and a subsequent response might be `[8], [12, 13]`.
If a requested range contains no revisions, then an empty array is returned. Empty ranges will not be omitted.
Each <String, revision>
value type is a 64-bit value encoded as a string of
11 characters, using the same encoding as our document _rev
values. The
reason for this is that 64-bit values cannot necessarily be represented in full
in JavaScript, as it handles all numbers as floating point, and can only
represent up to 2^53-1
faithfully.
Get documents by revision
Returns documents by revision for replication.
The body of the request should be JSON/VelocyPack and should consist of an array of string-encoded revision IDs:
[
<String, revision>,
<String, revision>,
...
<String, revision>
]
In particular, the revisions should be sorted in ascending order of their decoded values.
The result will be a JSON/VelocyPack array of document objects. If there is no document corresponding to a particular requested revision, an empty object will be returned in its place.
The response may be truncated if it would be very long. In this case, the response array length will be less than the request array length, and subsequent requests can be made for the omitted documents.
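A client can handle truncation with a simple re-request loop. The `fetch` callable below stands in for the HTTP call and is an assumption of this sketch; it may return fewer documents than were requested, in which case the omitted tail is requested again.

```python
def fetch_all_documents(fetch, revisions):
    """Request documents by revision until every requested revision has
    been answered, handling truncated responses as described above."""
    results = []
    remaining = list(revisions)
    while remaining:
        batch = fetch(remaining)
        if not batch:
            break  # defensive: avoid looping forever on an empty reply
        results.extend(batch)
        # The response preserves request order, so only the tail is missing.
        remaining = remaining[len(batch):]
    return results
```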
Each <String, revision>
value type is a 64-bit value encoded as a string of
11 characters, using the same encoding as our document _rev
values. The
reason for this is that 64-bit values cannot necessarily be represented in full
in JavaScript, as it handles all numbers as floating point, and can only
represent up to 2^53-1
faithfully.
Start replication from a remote endpoint
Starts a full data synchronization from a remote endpoint into the local ArangoDB database.
The sync method can be used by replication clients to connect an ArangoDB database to a remote endpoint and fetch the remote list of collections and indexes, as well as the collection data. It thus creates a local backup of the state of data at the remote ArangoDB database. sync works on a per-database level.

sync first fetches the list of collections and indexes from the remote endpoint by calling the inventory API of the remote database. It then purges the data in the local ArangoDB database and starts transferring collection data from the remote database to the local ArangoDB database. It extracts data from the remote database by calling the remote database's dump API until all data is fetched.
In case of success, the body of the response is a JSON object with the following attributes:
collections: an array of collections that were transferred from the endpoint
lastLogTick: the last log tick on the endpoint at the time the transfer was started. Use this value as the from value when starting the continuous synchronization later.
WARNING: calling this method will synchronize data from the collections found on the remote endpoint to the local ArangoDB database. All data in the local collections will be purged and replaced with data from the endpoint.
Use with caution!
`incremental` (boolean): if set to `true`, an incremental synchronization method is used for synchronizing data in collections. This method is useful when collections already exist locally, and only the remaining differences need to be transferred from the remote endpoint. In this case, the incremental synchronization can be faster than a full synchronization. The default value is `false`, meaning that the complete data from the remote collection is transferred.

`initialSyncMaxWaitTime` (integer): the maximum wait time (in seconds) that the initial synchronization will wait for a response from the leader when fetching initial collection data. This wait time can be used to control after what time the initial synchronization gives up waiting for a response and fails. This value is ignored if set to 0.
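A request body for this call might be assembled as follows. This is a sketch: the field names mirror the parameters described above plus the usual connection fields, but the exact set of accepted fields depends on the server version and should be checked before use.

```python
import json

def sync_request_body(endpoint, database, username, password,
                      incremental=False, initial_sync_max_wait_time=0):
    # Assembles the JSON body for a sync call; field names are assumptions
    # based on the parameter descriptions above.
    return json.dumps({
        "endpoint": endpoint,  # e.g. "tcp://leader.example.org:8529" (hypothetical)
        "database": database,
        "username": username,
        "password": password,
        "incremental": incremental,
        "initialSyncMaxWaitTime": initial_sync_max_wait_time,
    })
```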
Get the cluster collections and indexes
Returns the array of collections and indexes available on the cluster.
The response will be an array of JSON objects, one for each collection.
Each collection contains exactly two keys, `parameters` and `indexes`. This information comes from `Plan/Collections/{DB-Name}/*` in the Agency, just that the `indexes` attribute there is relocated to adjust it to the data format of arangodump.