TiCDC is TiDB's change data capture framework. It replicates change data to various downstream systems, such as MySQL protocol-compatible databases and Kafka.
See a detailed introduction to the TiCDC architecture.
To check out the source code, run test cases, and build binaries, you can simply run:

```bash
$ make cdc
$ make test
```

Note that building TiCDC requires Go >= 1.23.
When TiCDC is built successfully, you can find the binary in the `bin` directory. Instructions for unit tests and integration tests can be found in Running tests.
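As a quick sanity check after the build, you can print the binary's version information (a minimal sketch; `cdc version` reports the release version and build metadata of the binary):

```bash
# Verify the freshly built binary by printing its version and build info
$ ./bin/cdc version
```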
You can set up a CDC cluster for a replication test manually as follows:

- Set up a TiDB cluster.
- Start a CDC cluster, which contains one or more CDC servers. The command to start one CDC server is `cdc server --pd http://10.0.10.25:2379`, where `http://10.0.10.25:2379` is the client-url of pd-server.
- Start a replication changefeed by `cdc cli changefeed create --pd http://10.0.10.25:2379 --start-ts 413105904441098240 --sink-uri mysql://root:123456@127.0.0.1:3306/`. The `--start-ts` is a TiDB timestamp oracle (TSO). If it is not provided or is set to zero, the TSO of the start time will be used. A quick way to verify the changefeed is shown after this list.
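After the changefeed is created, you can check that it is running. A minimal sketch using the TiCDC CLI (the changefeed ID below is a placeholder; use the ID reported by `changefeed list`):

```bash
# List all changefeeds and their checkpoint progress
$ cdc cli changefeed list --pd http://10.0.10.25:2379

# Query the detailed status of a single changefeed (replace <changefeed-id> with a real ID)
$ cdc cli changefeed query --pd http://10.0.10.25:2379 --changefeed-id <changefeed-id>
```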
For details, see Deploy TiCDC.
You can also set up a replication test environment quickly with docker-compose:

```bash
# Start TiDB cluster
$ docker-compose -f ./deployments/ticdc/docker-compose/docker-compose-mysql.yml up -d

# Attach to the control container to run TiCDC
$ docker exec -it ticdc_controller sh

# Start to feed the changes on the upstream tidb, and sink to the downstream tidb
$ ./cdc cli changefeed create --pd http://upstream-pd:2379 --sink-uri mysql://root@downstream-tidb:4000/

# Exit the control container
$ exit

# Load data to the upstream tidb
$ sysbench --mysql-host=127.0.0.1 --mysql-user=root --mysql-port=4000 --mysql-db=test oltp_insert --tables=1 --table-size=100000 prepare

# Check sync progress
$ mysql -h 127.0.0.1 -P 5000 -u root -e "SELECT COUNT(*) FROM test.sbtest1"
```
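When you are done, you can tear the test environment down by stopping the same compose file used above:

```bash
# Stop and remove the containers started by the compose file
$ docker-compose -f ./deployments/ticdc/docker-compose/docker-compose-mysql.yml down
```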
We welcome and greatly appreciate contributions. See CONTRIBUTING.md for details on submitting patches and the contribution workflow.