How to stop artifacts occurring when zooming into a segmentation image with OpenCV

I have a bunch of segmentation images like this: [Segmentation image of a human]

I'm trying to scale these images by some ratio which is calculated elsewhere in the code.

However, using the following code leads to unwanted artifacts in the image.

translation_matrix = np.float32([[space_to_fill, 0, 0], [0, space_to_fill, 0]])

seg_img_translation = cv2.warpAffine(seg_img, translation_matrix, (num_cols, num_rows), flags=cv2.INTER_AREA)

[Zoomed-in segmentation image of a human with artifacts]

(the screenshot has been zoomed in to focus on a particular region where these artifacts are visible).

I've tried passing different flags to the warpAffine method, but with no luck.

By the way, these images are stored as palette (label) images with 20 palette entries rather than RGB, so each image has shape [H x W] and every entry is a class index in [0, 1, ..., 19]. I can convert to RGB and back, but the end result must contain only the original palette values.

Any thoughts on how I can zoom in on these images without introducing the artifacts?


🔴 No definitive solution yet

📌 Solution 1

Using cv2.INTER_NEAREST instead of cv2.INTER_AREA as the flags parameter of cv2.warpAffine solved the problem: nearest-neighbour interpolation copies existing pixel values rather than averaging neighbouring class indices, so no out-of-palette values are introduced.

Credit to Christoph Rackwitz for the answer.