Detectron: The RLE or Polygon format of "segmentation" for extending to the COCO dataset

Created on 2 Feb 2018  ·  38 Comments  ·  Source: facebookresearch/Detectron

Hi Detectron,

Recently I tried to add my custom COCO-style data to run on Detectron and encountered the following issues.
(1) The "segmentation" field in COCO data looks like the examples below:

{"segmentation": [[499.71, 397.28,......342.71, 172.31]], "area": 43466.12825, "iscrowd": 0, "image_id": 182155, "bbox": [338.89, 51.69, 205.82, 367.61], "category_id": 1, "id": 1248258},

{"segmentation": {"counts": [66916, 6, 587,..... 1, 114303], "size": [594, 640]}, "area": 6197, "iscrowd": 1, "image_id": 284445, "bbox": [112, 322, 335, 94], "category_id": 1, "id": 9.001002844e+11},
The first "segmentation" format is polygon; the second is RLE, which needs to be encoded/decoded with the mask API.

Both of the above formats run on Detectron.

(2) I added a new category and generated a new RLE-format "segmentation" field via the COCO API's mask encode()/decode().
I generated data like this:

"segmentation": [{"counts": "mng=1fb02O1O1O001N2O001O1O0O2O1O1O001N2O001O1O0O2O1O001O1O1O010000O01000O010000O01000O01000O01000O01N2N2M2O2N2N1O2N2O001O10O?B000O10O1O001^OQ^O9Pb0EQ^O;Wb0OO01O1O1O001O1N2N`jT3","size": [600,1000]}]

I found that this "counts" string differs from the original COCO "segmentation" JSON format, although it does run on Matterport's Mask R-CNN implementation.

I also tried to modify some of Detectron's code to meet my requirement, but that is very difficult for me because a lot of code would need to change.

Could you give me some suggestions for running my custom data?

Thanks.

Labels: community, help wanted

Most helpful comment

@topcomma
Maybe you can try converting the masks to polygons.

import cv2
import numpy as np

labels_info = []
for mask in mask_list:  # mask_list: list of binary masks, one (H, W) uint8 array per object
    # OpenCV 3.x (e.g. 3.2): findContours returns (image, contours, hierarchy)
    mask_new, contours, hierarchy = cv2.findContours(mask.astype(np.uint8), cv2.RETR_TREE,
                                                     cv2.CHAIN_APPROX_SIMPLE)
    # OpenCV 2.x and 4.x: findContours returns (contours, hierarchy)
    # contours, hierarchy = cv2.findContours(mask.astype(np.uint8), cv2.RETR_TREE,
    #                                        cv2.CHAIN_APPROX_SIMPLE)
    segmentation = []

    for contour in contours:
        contour = contour.flatten().tolist()
        # keep only contours with more than two points (> 4 values),
        # otherwise the COCO API treats the entry as a box
        if len(contour) > 4:
            segmentation.append(contour)
    if len(segmentation) == 0:
        continue
    # get area, bbox, category_id and so on
    labels_info.append(
        {
            "segmentation": segmentation,  # poly
            "area": area,  # segmentation area
            "iscrowd": 0,
            "image_id": index,
            "bbox": [x1, y1, bbox_w, bbox_h],
            "category_id": category_id,
            "id": label_id
        },
    )

All 38 comments

I had a similar issue: some of the functions in lib/utils/segms.py expect segmentations to be in "poly" format and break when they are provided as RLE.
It is inconvenient but seems to be in line with the spec for non-crowd regions (iscrowd=0):

The segmentation format depends on whether the instance represents a single object (iscrowd=0 in which case polygons are used) or a collection of objects (iscrowd=1 in which case RLE is used).

[1] http://cocodataset.org/#download, section "4.1. Object Instance Annotations"

The workaround for me was to transform everything to "poly" format, which is essentially a list of (x, y) vertices.

-Lesha.
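
For reference, a quick way to check which format a given "segmentation" entry uses (my own check, not something from segms.py) is to look at its type:

def segm_format(ann):
    segm = ann['segmentation']
    if isinstance(segm, list):
        return 'polygon'            # list of [x1, y1, x2, y2, ...] vertex lists
    if isinstance(segm.get('counts'), list):
        return 'uncompressed RLE'   # "counts" is a plain list of run lengths
    return 'compressed RLE'         # "counts" is an encoded string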


@amokeev, how do you convert "RLE" to "poly" format in your workaround?

@amokeev, are you sure the numbers in "segmentation" are (x, y) vertices? For example, "66916" is a very large number! Also, although I set "iscrowd" to "1" for the RLE format, it still could not run on Detectron.

I interpret "poly" as a list of polygons defined by their vertices, like [[x1,y1,x2,y2,…,xN,yN],…,[x1,y1,x2,y2,…,xN,yN]], where the coordinates are at the same scale as the image.
Masks encoded this way are shown correctly by the CocoAPI [1] (a small sketch follows below).

But you may want to get an "official" answer.

[1] https://github.com/cocodataset/cocoapi
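
For illustration only (a made-up triangle, not data from this issue), such a poly can be rasterized and checked with the COCO mask API:

from pycocotools import mask as maskUtils

h, w = 594, 640                                       # image height and width
poly = [[100.0, 100.0, 300.0, 100.0, 200.0, 250.0]]   # one triangle, image-scale coordinates
rle = maskUtils.merge(maskUtils.frPyObjects(poly, h, w))
print(maskUtils.area(rle))                            # pixel area of the rasterized polygon
print(maskUtils.decode(rle).shape)                    # (594, 640) binary mask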


@topcomma
Maybe you can try converting the masks to polygons.

import cv2
import numpy as np

labels_info = []
for mask in mask_list:  # mask_list: list of binary masks, one (H, W) uint8 array per object
    # OpenCV 3.x (e.g. 3.2): findContours returns (image, contours, hierarchy)
    mask_new, contours, hierarchy = cv2.findContours(mask.astype(np.uint8), cv2.RETR_TREE,
                                                     cv2.CHAIN_APPROX_SIMPLE)
    # OpenCV 2.x and 4.x: findContours returns (contours, hierarchy)
    # contours, hierarchy = cv2.findContours(mask.astype(np.uint8), cv2.RETR_TREE,
    #                                        cv2.CHAIN_APPROX_SIMPLE)
    segmentation = []

    for contour in contours:
        contour = contour.flatten().tolist()
        # keep only contours with more than two points (> 4 values),
        # otherwise the COCO API treats the entry as a box
        if len(contour) > 4:
            segmentation.append(contour)
    if len(segmentation) == 0:
        continue
    # get area, bbox, category_id and so on
    labels_info.append(
        {
            "segmentation": segmentation,  # poly
            "area": area,  # segmentation area
            "iscrowd": 0,
            "image_id": index,
            "bbox": [x1, y1, bbox_w, bbox_h],
            "category_id": category_id,
            "id": label_id
        },
    )

@amokeev,
@Sundrops

Thanks for your suggestions.
Will try.

@Sundrops, using your conversion method I can get the "poly" list. Thank you very much! But I still don't know why the numbers in the COCO JSON file are sometimes large and sometimes small, such as "66916" or "1"?

@topcomma COCO annotations have two types of segmentation annotations:

  1. polygon (object instances): [[499.71, 397.28,......342.71, 172.31]], a list of (x, y) vertices.
  2. uncompressed RLE (crowd): "segmentation": {"counts": [66916, 6, 587,..... 1, 114303], "size": [594, 640]}. The first value, 66916, is the length of the first run of label-0 (background) pixels.

Both the polygon and the uncompressed RLE formats are converted to the compact RLE format with the MaskApi.
The compact RLE format looks like:
"segmentation": [{"counts": "mng=1fb02O1O1O001N2O001O1O0O2O1O1O001N2O001O1O0O2O1O001O1O1O010000O01000O010000O01000O01000O01000O01N2N2M2O2N2N1O2N2O001O10O?B000O10O1O001^OQ^O9Pb0EQ^O;Wb0OO01O1O1O001O1N2N`jT3","size": [600,1000]}]

@Sundrops ,
Thanks for your great help.
My custom COCO-like data can be trained on Detectron now.

@topcomma,
I have the same problem as you.
Following Sundrops's method, I can't find the file that converts masks to polygons.
Could you tell me which file it is? Thank you very much!

@lg12170226
You may refer to the COCO-Stuff (https://github.com/nightrome/cocostuff) Python code and implement it yourself.
There is no file for this annotation conversion in the Detectron code base.

@topcomma: I have a raw image and N label images, each label stored in a separate file. I want to train Mask R-CNN on my own dataset, so I first need to convert it to COCO format. Could you share the code for converting it to COCO style? Thanks

Just wondering if there is a way to convert compressed RLEs to polys/uncompressed RLEs?

@realwecan after decoding RLE with pycocotools.mask.decode, you can check my implementation to generate polygons with opencv:

coco-json-converter

@hazirbas: thanks for your code. Why not use DAVIS 2017, which contains instance segmentation? Could we use your code to convert DAVIS 2017 to COCO format so we can use this Mask R-CNN implementation?

@John1231983 you need to modify the script accordingly for reading the split files as well as the db_info.yml file. For my own research, I needed it for DAVIS 2016.

Another solution for generating polygons, using skimage instead of OpenCV.

import json
import numpy as np
from pycocotools import mask
from skimage import measure

ground_truth_binary_mask = np.array([[  0,   0,   0,   0,   0,   0,   0,   0,   0,   0],
                                     [  0,   0,   0,   0,   0,   0,   0,   0,   0,   0],
                                     [  0,   0,   0,   0,   0,   1,   1,   1,   0,   0],
                                     [  0,   0,   0,   0,   0,   1,   1,   1,   0,   0],
                                     [  0,   0,   0,   0,   0,   1,   1,   1,   0,   0],
                                     [  0,   0,   0,   0,   0,   1,   1,   1,   0,   0],
                                     [  1,   0,   0,   0,   0,   0,   0,   0,   0,   0],
                                     [  0,   0,   0,   0,   0,   0,   0,   0,   0,   0],
                                     [  0,   0,   0,   0,   0,   0,   0,   0,   0,   0]], dtype=np.uint8)

fortran_ground_truth_binary_mask = np.asfortranarray(ground_truth_binary_mask)
encoded_ground_truth = mask.encode(fortran_ground_truth_binary_mask)
ground_truth_area = mask.area(encoded_ground_truth)
ground_truth_bounding_box = mask.toBbox(encoded_ground_truth)
contours = measure.find_contours(ground_truth_binary_mask, 0.5)

annotation = {
        "segmentation": [],
        "area": ground_truth_area.tolist(),
        "iscrowd": 0,
        "image_id": 123,
        "bbox": ground_truth_bounding_box.tolist(),
        "category_id": 1,
        "id": 1
    }

for contour in contours:
    contour = np.flip(contour, axis=1)
    segmentation = contour.ravel().tolist()
    annotation["segmentation"].append(segmentation)

print(json.dumps(annotation, indent=4))

How would you convert a binary mask or compressed RLE into uncompressed RLE for use in the "counts" field with "iscrowd": 1?
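
This one never got an answer in the thread, but building uncompressed RLE from a binary mask is mostly bookkeeping. A sketch of my own (not taken from the COCO API):

import numpy as np

def binary_mask_to_uncompressed_rle(binary_mask):
    # COCO RLE runs over the mask in column-major (Fortran) order,
    # and "counts" must start with a run of zeros (possibly of length 0)
    h, w = binary_mask.shape
    flat = binary_mask.ravel(order='F')
    change = np.flatnonzero(flat[1:] != flat[:-1]) + 1
    runs = np.diff(np.concatenate(([0], change, [flat.size]))).tolist()
    if flat[0] == 1:
        runs = [0] + runs
    return {"counts": runs, "size": [h, w]}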

@waspinator The segmentations produced by your skimage code are different from those produced by @Sundrops's cv2 code.
Are both results correct, and can both be used by Detectron? Please give me some advice, thank you. @waspinator @Sundrops

Results using your code with skimage:

{"segmentation": [[0.0, 252.00196078431372, 1.0, 252.00196078431372, 2.0, 252.00196078431372, 3.0, 252.00196078431372, 4.0, 252.00196078431372, 5.0, 252.00196078431372, 6.0, 252.00196078431372, 7.0, 252.00196078431372, 8.0, 252.00196078431372, 9.0, 252.00196078431372, 10.0, 252.00196078431372, 11.0, 252.00196078431372, 12.0, 252.00196078431372, 13.0, 252.00196078431372, 14.0, 252.00196078431372, 15.0, 252.00196078431372, 16.0, 252.00196078431372, 17.0, 252.00196078431372, 18.0, 252.00196078431372, 19.0, 252.00196078431372, 20.0, 252.00196078431372, 21.0, 252.00196078431372, 22.0, 252.00196078431372, 23.0, 252.00196078431372, 24.0, 252.00196078431372, 25.0, 252.00196078431372, 26.0, 252.00196078431372, 27.0, 252.00196078431372, 28.0, 252.00196078431372, 29.0, 252.00196078431372, 30.0, 252.00196078431372, 31.0, 252.00196078431372, 32.0, 252.00196078431372, 33.0, 252.00196078431372, 34.0, 252.00196078431372, 35.0, 252.00196078431372, 36.0, 252.00196078431372, 37.0, 252.00196078431372, 38.0, 252.00196078431372, 39.0, 252.00196078431372, 40.0, 252.00196078431372, 41.0, 252.00196078431372, 42.0, 252.00196078431372, 43.0, 252.00196078431372, 44.0, 252.00196078431372, 45.0, 252.00196078431372, 46.0, 252.00196078431372, 47.0, 252.00196078431372, 48.0, 252.00196078431372, 49.0, 252.00196078431372, 50.0, 252.00196078431372, 51.0, 252.00196078431372, 52.0, 252.00196078431372, 53.0, 252.00196078431372, 54.0, 252.00196078431372, 55.0, 252.00196078431372, 56.0, 252.00196078431372, 57.0, 252.00196078431372, 58.0, 252.00196078431372, 59.0, 252.00196078431372, 60.0, 252.00196078431372, 61.0, 252.00196078431372, 62.0, 252.00196078431372, 63.0, 252.00196078431372, 64.0, 252.00196078431372, 65.0, 252.00196078431372, 66.0, 252.00196078431372, 67.0, 252.00196078431372, 68.0, 252.00196078431372, 69.0, 252.00196078431372, 70.0, 252.00196078431372, 71.0, 252.00196078431372, 72.0, 252.00196078431372, 73.0, 252.00196078431372, 74.0, 252.00196078431372, 75.0, 252.00196078431372, 76.0, 252.00196078431372, 77.0, 252.00196078431372, 78.0, 252.00196078431372, 79.0, 252.00196078431372, 80.0, 252.00196078431372, 81.0, 252.00196078431372, 82.0, 252.00196078431372, 83.0, 252.00196078431372, 84.0, 252.00196078431372, 85.0, 252.00196078431372, 86.0, 252.00196078431372, 87.0, 252.00196078431372, 88.0, 252.00196078431372, 89.0, 252.00196078431372, 90.0, 252.00196078431372, 91.0, 252.00196078431372, 92.0, 252.00196078431372, 93.0, 252.00196078431372, 93.00196078431372, 252.0, 94.0, 251.00196078431372, 95.0, 251.00196078431372, 96.0...

Results with @Sundrops's code using cv2:

[94, 252, 93, 253, 0, 253, 0, 286, 188, 286, 188, 269, 187, 268, 187, 252]

@Kongsea I haven't tested @Sundrops's cv2 implementation, but the basic idea should be the same. They will produce different results since there is an infinite number of point sets you can use to describe a shape, but otherwise they should both work. I just didn't have cv2 installed, so I wrote something that doesn't require it.

@Kongsea @waspinator I have tested my code. It works.

Thank you @Sundrops @waspinator .
I will have a try.

@waspinator Is there any way to convert segmentation polygon vertices to RLE? My goal is to set iscrowd=1.

@Sundrops Why did you comment out this part of your code?

# if len(contour) > 4:
#     segmentation.append(contour)
# if len(segmentation) == 0:
#     continue

We indeed need to handle such a case, right?

@Yuliang-Zou You should uncomment this part when the contours are used for Detectron, because Detectron will treat a contour as a rectangle when len(contour) == 4. I have updated my previous code.

@Sundrops Thanks. But we still need to handle len(contour)==2, right?

@Yuliang-Zou Yes, but the check if len(contour) > 4: already handles both len(contour)==2 and len(contour)==4.

@Sundrops I see, thank you!

I wrote a library and article to help with creating COCO-style datasets.

https://patrickwasp.com/create-your-own-coco-style-dataset/

@Sundrops & @topcomma I have problem to load the annotated data in pycocotols as my annotation is included segmentation without mask. any idea how to visualize the annotation without mask in pycocotools?

@Sundrops
ann: {
"segmentation": [
[312.29, 562.89, 402.25, 511.49, 400.96, 425.38, 398.39, 372.69, 388.11, 332.85, 318.71, 325.14, 295.58, 305.86, 269.88, 314.86, 258.31, 337.99, 217.19, 321.29, 182.49, 343.13, 141.37, 348.27, 132.37, 358.55, 159.36, 377.83, 116.95, 421.53, 167.07, 499.92, 232.61, 560.32, 300.72, 571.89]
],
"area": 54652.9556,
"iscrowd": 0,
"image_id": 480023,
"bbox": [116.95, 305.86, 285.3, 266.03],
"category_id": 58,
"id": 86
}
How can I calculate the area with mask.py in the COCO API? Thank you.
My code is as follows, but it raises an error:

segmentation = ann['segmentation']
bimask = np.array(segmentation, dtype = np.uint8, order = 'F')
print("bimask:", bimask)
rleObjs = mask.encode(bimask)
print("rleObjs:", rleObjs)
area = mask.area(rleObjs)
print("area:", area)

@manketon
Maybe you can try cv2, but I'm not sure it's right. It's just an example adapted from "Removing contours from an image using Python and OpenCV".

import cv2
import numpy as np

def is_contour_bad(c):
    # approximate the contour
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    # return True if it is not a rectangle
    return not len(approx) == 4

image = cv2.imread('xx.jpg')
contours = ann['segmentation']
mask = np.ones(image.shape[:2], dtype="uint8") * 255
# loop over the contours
for c in contours:
    # reshape the flat [x1, y1, x2, y2, ...] list into the (N, 1, 2) int array cv2 expects
    c = np.array(c, dtype=np.int32).reshape(-1, 1, 2)
    # if the contour is not a rectangle, draw it on the mask
    if is_contour_bad(c):
        cv2.drawContours(mask, [c], -1, 0, -1)
area = (mask == 0).sum()
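
Alternatively, a minimal sketch that stays inside pycocotools (assuming the image height and width are known, e.g. from the corresponding "images" record) and mirrors what the COCO API's annToRLE does:

from pycocotools import mask as maskUtils

h, w = 640, 640                          # assumed image size; read it from the image record
segm = ann['segmentation']               # list of polygons: [[x1, y1, x2, y2, ...], ...]
rle = maskUtils.merge(maskUtils.frPyObjects(segm, h, w))
area = float(maskUtils.area(rle))
bbox = maskUtils.toBbox(rle).tolist()    # [x, y, width, height]
print("area:", area, "bbox:", bbox)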

@Sundrops @waspinator I have a question. If my original object mask has big holes, how should I correctly convert it back after converting it to polygons? The decode and merge functions in the COCO API treat the holes as part of the object, so when converting back, the holes become part of the mask. What should I do in this case?

@wangg12 As far as I know COCO doesn't have a native way of encoding holes.

for contour in contours:
        contour = contour.flatten().tolist()
        segmentation.append(contour)
        if len(contour) > 4:
            segmentation.append(contour)

Hi @Sundrops, sincere thanks for your code. I'm a beginner with Detectron. I'm confused about why the length of the contour (which is a list) should be larger than 4; in other words, what happens to the model when the length is smaller than 4? One of your replies said that Detectron will treat it as a rectangle. From my perspective, some objects really are rectangular, so I think that would be fine. Also, I wonder whether you meant to append the contour twice in one iteration. I think the right code should be:

for contour in contours:
        contour = contour.flatten().tolist()
        if len(contour) > 4:
            segmentation.append(contour)

It will be highly appreciated if you can give me some suggestions. Thank you so much!

@BobZhangHT Yes, it should append the contour only once per iteration.
For your first question: if len(ann['segmentation'][0]) == 4, the cocoapi will assume the entries are bounding boxes (rectangles).

# cocoapi/PythonAPI/pycocotools/coco.py
def annToRLE(self, ann):
    t = self.imgs[ann['image_id']]
    h, w = t['height'], t['width']
    segm = ann['segmentation']
    if type(segm) == list:
        rles = maskUtils.frPyObjects(segm, h, w) 
        rle = maskUtils.merge(rles)
   ......
# cocoapi/PythonAPI/pycocotools/_mask.pyx
def frPyObjects(pyobj, h, w):
    # encode rle from a list of python objects
    if type(pyobj) == np.ndarray:
        objs = frBbox(pyobj, h, w)
    elif type(pyobj) == list and len(pyobj[0]) == 4:
        objs = frBbox(pyobj, h, w)
    elif type(pyobj) == list and len(pyobj[0]) > 4:
        objs = frPoly(pyobj, h, w)
   ......

@Sundrops Thanks for your reply!


@amokeev How can I transform RLE to polygons when "iscrowd" is "1"? The Matterport Mask R-CNN implementation works only with polygons, and I want to use it with a COCO dataset created using pycococreator.
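
Not an official answer, but combining two techniques already shown in this thread (pycocotools to decode the RLE, OpenCV to trace contours) gives a rough sketch, assuming rle is the compressed {"counts": ..., "size": ...} dict:

import cv2
import numpy as np
from pycocotools import mask as maskUtils

binary_mask = maskUtils.decode(rle)                       # (H, W) array of 0/1
# findContours returns 3 values on OpenCV 3.x and 2 on 2.x/4.x; [-2:] handles both
contours, hierarchy = cv2.findContours(binary_mask.astype(np.uint8),
                                       cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
polygons = []
for contour in contours:
    contour = contour.flatten().tolist()
    if len(contour) > 4:                                  # drop degenerate contours
        polygons.append(contour)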
