The release of the distributed replicated block device DRBD 9.2.0 has been published. DRBD implements something akin to a RAID-1 array built from disks on several machines joined over the network (mirroring over the network). The system is implemented as a Linux kernel module and is distributed under the GPLv2 license. The DRBD 9.2.0 branch can be used as a transparent replacement for DRBD 9.x.x and is fully compatible at the level of the protocol, configuration files, and utilities.
DRBD makes it possible to combine the drives of cluster nodes into a single fault-tolerant storage pool. To applications and to the system, this storage appears as a block device that is identical on all nodes. When DRBD is used, every operation on the local disk is sent to the other nodes and synchronized with their disks. If one node fails, the storage automatically continues operating on the remaining nodes. When the failed node becomes available again, its state is automatically brought up to date.
The cluster forming the storage may include several dozen nodes, located both on the local network and spread geographically across different data centers. Synchronization in such branched storage setups is performed using mesh-network techniques (data propagates along a chain from node to node). Replication between nodes can be performed either synchronously or asynchronously. For example, locally placed nodes can use synchronous replication, while asynchronous replication with additional compression and traffic encryption can be used for nodes at remote sites.
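As an illustration of such a setup (a hypothetical sketch based on the DRBD 9 configuration scheme, not taken from the release announcement; the resource name, host names, addresses, and devices are placeholder assumptions), a resource definition could describe a three-node mesh that replicates synchronously:

```
resource r0 {
  net {
    protocol C;        # synchronous replication between the local nodes
  }
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;

  on alpha   { node-id 0; address 10.0.0.1:7789; }
  on bravo   { node-id 1; address 10.0.0.2:7789; }
  on charlie { node-id 2; address 10.0.0.3:7789; }

  # full mesh: every node connects to every other node
  connection-mesh {
    hosts alpha bravo charlie;
  }
}
```

On each node the resource then appears as /dev/drbd0 and, once brought up with `drbdadm up r0` and promoted with `drbdadm primary r0`, can be formatted and mounted like any local block device.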
In the new release:
- Latency for mirrored write requests has been reduced. Tighter integration with the network stack made it possible to cut the number of scheduler context switches.
- Contention between application I/O and resynchronization I/O has been reduced by optimizing the locking used for resynchronization extents.
- Resynchronization performance has been significantly increased on backends that use dynamic allocation of storage space ("thin provisioning"). Performance was improved by batching TRIM/discard operations, which take much longer than ordinary write operations.
- Added support for network namespaces, which made it possible to integrate with Kubernetes so that replication traffic is carried over a separate network attached to the containers rather than over the host's network.
- Added the drbd_transport_rdma module for use with InfiniBand/RoCE as a transport instead of TCP/IP over Ethernet. Using the new transport reduces latency, lowers CPU load, and allows data to be received without extra copy operations (zero-copy). A configuration sketch is shown after this list.
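As a rough sketch of how the alternative transport could be selected (an assumption based on the DRBD 9 configuration scheme, not quoted from the release announcement; the resource name is a placeholder), the transport is chosen in the net section of a resource:

```
resource r1 {
  net {
    transport rdma;   # use drbd_transport_rdma (InfiniBand/RoCE) instead of the default TCP transport
  }
  # ... "on" sections with node IDs, devices, and addresses as in the example above
}
```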