Hi! First, thank you for this wonderful work.
I finished training RetinaNet on the COCO dataset as you instructed.
Now I want to train RetinaNet (or another baseline model) on my own dataset.
I looked at the code structure and figured out that all model configurations are defined in *.yaml files, and that train_net.py reads the *.yaml file and constructs the dataset from the .json files in the COCO annotations directory.
So if I want to train on my own dataset, is the only way to generate a .json file similar to the COCO annotations?
@nonstop1962: yes, the recommended way is to convert your dataset to the COCO json annotation format. For bounding boxes, this can usually be done in < 100 lines of Python. Of course you could make arbitrary modifications to the Detectron code to support custom formats, but that's probably harder and more prone to issues with missed corner cases.
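For anyone wondering what such a conversion looks like: here's a minimal sketch (not an official Detectron tool) that writes bounding-box annotations in COCO-style JSON. The field names follow the COCO annotation format; the image, category, and box values are made up for illustration.

```python
import json

# Hypothetical input: one image with one box, in COCO's [x, y, width, height]
# convention (top-left corner plus size, in pixels).
images = [{"id": 1, "file_name": "img_0001.jpg", "width": 640, "height": 480}]
categories = [{"id": 1, "name": "tshirt", "supercategory": "clothing"}]
annotations = [{
    "id": 1,
    "image_id": 1,
    "category_id": 1,
    "bbox": [100.0, 120.0, 50.0, 80.0],  # [x, y, width, height]
    "area": 50.0 * 80.0,
    "iscrowd": 0,
}]

# COCO-format files are a single JSON object with these three lists.
coco = {"images": images, "annotations": annotations, "categories": categories}

with open("instances_custom.json", "w") as f:
    json.dump(coco, f)
```

In practice you'd loop over your own images and boxes to fill those three lists; the structure stays the same.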
Thank you for your answer!
Thank you for @rbgirshick !
OK, so can I create a training dataset if I want to segment T-shirts?
For example, a JSON file that can be used with the COCO API.
I think this platform can do segmentation, right?
Thank you~!
Hi,
for those who need it, here's a script for converting xml pascal voc annotations to coco json format : https://github.com/gamcoh/Object-Detection-Tools/blob/master/pascal_voc_xml2coco_json.py
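For reference, the core of such a conversion is small. Here's a rough sketch of the same idea as the linked script, using only the standard library: parsing one Pascal VOC annotation and turning its boxes into COCO-style records. The `category_ids` mapping and the surrounding file handling are simplified assumptions; a real converter would also assign image ids and annotation ids across the whole dataset.

```python
import xml.etree.ElementTree as ET

def voc_to_coco_boxes(xml_string, category_ids):
    """Parse one VOC annotation string into COCO-style box records.

    VOC stores boxes as corner coordinates (xmin, ymin, xmax, ymax);
    COCO wants [x, y, width, height], so we convert between the two.
    """
    root = ET.fromstring(xml_string)
    records = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        box = obj.find("bndbox")
        xmin = float(box.find("xmin").text)
        ymin = float(box.find("ymin").text)
        xmax = float(box.find("xmax").text)
        ymax = float(box.find("ymax").text)
        records.append({
            "category_id": category_ids[name],
            "bbox": [xmin, ymin, xmax - xmin, ymax - ymin],
            "area": (xmax - xmin) * (ymax - ymin),
            "iscrowd": 0,
        })
    return records
```

Usage: read each VOC .xml file, call this on its contents, and collect the records into the `annotations` list of a COCO JSON file.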
Thanks @gamcoh!