Environment:
1. Kafka: the latest Confluent Platform release, 5.0.0, which should correspond to Kafka 2.0.0
2. Debezium: also the latest, debezium-connector-mysql-0.8.1.Final-plugin.tar.gz
3. JDK 1.8.0
Steps:
1. Installed ZooKeeper and Kafka, and put the Debezium jars in the plugin directory (share/java); just in case, I also added that path to the environment configuration.
2. schema-registry.properties and connect-avro-distributed.properties are basically at their defaults; localhost has been changed to the local address.
3. Started ZooKeeper, Kafka, and Schema Registry in order with no problems; the broker joined the cluster registered in ZooKeeper.
4. Started Connect in distributed mode with /bin/connect-distributed -daemon ./etc/schema-registry/connect-avro-distributed.properties, also with no problems.
5. The trouble starts when POSTing the connector configuration to Connect with curl. The request is:
curl 'https://84.10.100.13:8083/connectors' -X POST -i -H "Content-Type: application/json" -d '{
"name":"kafka-mysql",
"config":{
"connector.clas":"io.debezium.connector.mysql.MySqlConnector",
"database.hostname":"84.10.100.4",
"database.port":"3306",
"database.user":"learn",
"database.password":"123456",
"database.server.id":"2",
"database.server.name":"mysql_4",
"table.whitelist":"learn.students,learn.tb_emp4",
"database.history.kafka.bootstrap.servers":"84.10.100.13:9092",
"database.history.kafka.topic":"mysql_42.history",
"include.schema.changes":"true",
"mode":"incrementing",
"database.history.skip.unparseable.ddl":"true"
"snapshot.mode":"schema_only_recovery",
"incrementing.column.name":"id",
},"tasks":[],"type":null}'
Then it fails with:
HTTP/1.1 500 Internal Server Error
Date: Thu, 08 Nov 2018 00:03:53 GMT
Content-Type: application/json
Content-Length: 476
Server: Jetty(9.4.11.v20180605)
{"error_code":500,"message":"Unexpected character ('\"' (code 34)): was expecting comma to separate Object entries\n at [Source: (org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream); line: 17, column: 2]\n at [Source: (org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream); line: 16, column: 41] (through reference chain:
org.apache.kafka.connect.runtime.rest.entities.CreateConnectorRequest[\"config\"])"}
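Reading the error closely: "line: 16, column: 41" and "line: 17, column: 2" refer to lines of the posted JSON body, where a comma is missing after the "database.history.skip.unparseable.ddl" entry; the body also has a trailing comma after "incrementing.column.name" and a typo ("connector.clas" instead of "connector.class"). A minimal sketch of a body that at least parses, with the original values copied unchanged, can be validated locally before POSTing:

```shell
# Hypothetical corrected body: comma added after the skip.unparseable.ddl
# entry, trailing comma removed, "connector.clas" -> "connector.class".
# The top-level "tasks"/"type" fields were dropped: they appear in Connect's
# responses, not in the create request.
BODY='{
  "name": "kafka-mysql",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "84.10.100.4",
    "database.port": "3306",
    "database.user": "learn",
    "database.password": "123456",
    "database.server.id": "2",
    "database.server.name": "mysql_4",
    "table.whitelist": "learn.students,learn.tb_emp4",
    "database.history.kafka.bootstrap.servers": "84.10.100.13:9092",
    "database.history.kafka.topic": "mysql_42.history",
    "include.schema.changes": "true",
    "database.history.skip.unparseable.ddl": "true",
    "snapshot.mode": "schema_only_recovery",
    "mode": "incrementing",
    "incrementing.column.name": "id"
  }
}'
# Validate before sending; json.tool fails loudly on malformed JSON:
echo "$BODY" | python3 -m json.tool > /dev/null && echo "body parses"
# Then POST it (note: plain http unless the worker was configured for TLS):
# curl -i -X POST -H "Content-Type: application/json" \
#      --data "$BODY" http://84.10.100.13:8083/connectors
```

Note that "mode" and "incrementing.column.name" look like kafka-connect-jdbc source settings rather than Debezium ones; they are most likely ignored here and can probably be dropped.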
Could someone please take a look at how to solve this? The problem has been bothering me for a long time. Thanks!
PS: does the REST API also need to be started separately? It doesn't seem to come up...
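On the PS: the Connect REST API is not a separate service; it is served by the distributed worker itself (by default on port 8083), so nothing extra needs to be started. Assuming the worker from step 4 is running, it can be probed directly (host/port copied from the curl command above):

```shell
# These endpoints are served by the connect-distributed worker itself.
# --connect-timeout keeps curl from hanging if the worker is unreachable.
curl -s --connect-timeout 5 http://84.10.100.13:8083/                  || true  # worker version/commit info
curl -s --connect-timeout 5 http://84.10.100.13:8083/connectors        || true  # registered connectors
curl -s --connect-timeout 5 http://84.10.100.13:8083/connector-plugins || true  # confirms Debezium was loaded from share/java
```

If /connector-plugins does not list io.debezium.connector.mysql.MySqlConnector, the worker did not pick up the jars from share/java.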