Confluence is enterprise knowledge-management and collaboration software: a professional wiki.
1. Installation
The JDBC URL:
Because this MySQL server is version 5.7, the storage-engine parameter has to be dropped from the URL. This is a big pitfall.
jdbc:mysql://192.168.20.211:3306/confluence?useUnicode=true&characterEncoding=utf8
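For reference, a sketch of where this URL lives in confluence.cfg.xml (assuming a standard standalone install; note that inside XML the `&` must be escaped as `&amp;`):

```xml
<property name="hibernate.connection.url">jdbc:mysql://192.168.20.211:3306/confluence?useUnicode=true&amp;characterEncoding=utf8</property>
```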
Get a space's content
curl -u admin:123456 'http://10.208.231.165/rest/api/content?spaceKey=pay&limit=10&start=20'    (limit is capped at 500; asking for more still returns at most 500 entries; quote the URL so the shell doesn't interpret the &)
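A minimal sketch of building these paginated URLs in Python (stdlib only; the host, space key, and pagination values are the example values from the curl call above):

```python
from urllib.parse import urlencode

BASE = 'http://10.208.231.165/rest/api/content'

def space_content_url(space_key, start=0, limit=10):
    """Build a paginated space-content URL (the server caps limit at 500)."""
    return BASE + '?' + urlencode({'spaceKey': space_key, 'start': start, 'limit': limit})

print(space_content_url('pay', start=20, limit=10))
```

To actually fetch it you could pass the URL to e.g. `requests.get(url, auth=('admin', '123456'))`.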
Get a single page's info by title (this also gives you the page id). Confluence does not allow two pages with the same title within a space, so duplicate titles are not a concern:
kwargs = {'url'     : 'http://192.168.20.211:10019/rest/api/content',
          'method'  : 'GET',
          'params'  : {'spaceKey': 'FISH', 'title': 'fishtest'},
          'headers' : dictHeader}  # dictHeader defined elsewhere
# e.g. resp = requests.request(**kwargs)
Get child content (first obtain the parent page's id via its title):
curl -u admin:123456 'http://192.168.20.211:10019/rest/api/content/720903/child?expand=page&start=20&limit=10'
Get a page's body content
curl -u admin:123456 'http://192.168.20.211:10019/rest/api/content/1015810?expand=space,body.view,version,container'
curl -u admin:123456 'http://192.168.20.211:10019/rest/api/content/1015810?expand=body.view'
</curl>
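The response is JSON; with expand=body.view the rendered HTML sits under body.view.value. A sketch with a trimmed sample payload (the sample JSON below is illustrative, not a real server response):

```python
import json

# Trimmed-down shape of /rest/api/content/{id}?expand=body.view
sample = json.loads('''
{
  "id": "1015810",
  "title": "fishtest",
  "body": {"view": {"value": "<p>hello</p>", "representation": "view"}}
}
''')

html = sample['body']['view']['value']  # the rendered page HTML
print(html)
```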