
Python + Image Processing: Efficiently Assign Pixel Values to Nearest Predefined Value

I implemented an algorithm that uses OpenCV's k-means to quantize the unique brightness values present in a grayscale image. Clustering the unique values (rather than all pixels) avoids a bias toward the image background, whose pixels typically all share one value.

However, I struggled to find a way to utilize this data to quantize a given input image.

I implemented a very naive solution, but it is unusably slow for the required input sizes (4000×4000):


for x in range(W):
    for y in range(H):
        center_id = np.argmin([(arr[y,x]-center)**2 for center in centers])
        ret_labels2D[y,x] = sortorder.index(center_id)
        ret_qimg[y,x] = centers[center_id]

Essentially, I assign each pixel to the predefined level with the minimum squared error.

Is there any way to do this faster? At 4000×4000, this implementation is completely unusable.
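For reference, the same minimum-squared-error assignment can be written without a Python-level loop using NumPy broadcasting. A minimal sketch, where `arr`, `centers`, and `sortorder` are hypothetical stand-ins for the values in the code below:

```python
import numpy as np

# Hypothetical stand-ins: a grayscale float image and unsorted k-means centers.
arr = np.random.rand(512, 512)
centers = np.array([0.7, 0.1, 0.95, 0.4])
sortorder = list(np.argsort(centers))

# Distance from every pixel to every center via broadcasting: shape (H, W, K).
center_id = np.argmin((arr[..., None] - centers) ** 2, axis=-1)

qimg = centers[center_id]          # quantized image
rank = np.argsort(sortorder)       # old center index -> position in brightness order
labels2D = rank[center_id]         # labels sorted by brightness
```

Note that broadcasting allocates a temporary (H, W, K) distance array, so at 4000×4000 with many centers this trades memory for speed.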

Full code:

import cv2
import numpy as np

def unique_quantize(arr, K, eps = 0.05, max_iter = 100, max_tries = 20):
    """@param arr: 2D numpy array of floats"""

    H, W = arr.shape

    # np.unique already returns a sorted 1-D array; cv2.kmeans expects
    # an (N, 1) float32 sample matrix.
    Z = np.unique(arr).astype(np.float32).reshape(-1, 1)

    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, max_iter, eps)

    compactness, labels, centers = cv2.kmeans(Z, K, None, criteria, max_tries, cv2.KMEANS_RANDOM_CENTERS)

    labels = np.ravel(labels)
    centers = np.ravel(centers)

    sortorder = list(np.argsort(centers))  # sorted position -> old center index

    ret_center = centers[sortorder]
    ret_labels2D = np.zeros((H,W), int)
    ret_qimg = np.zeros((H,W), float)

    for x in range(W):
        for y in range(H):
            center_id = np.argmin([(arr[y,x]-center)**2 for center in centers])
            ret_labels2D[y,x] = sortorder.index(center_id)
            ret_qimg[y,x] = centers[center_id]

    return ret_center, ret_labels2D, ret_qimg

Solution:

As your image is grayscale (presumably 8-bit), a lookup table is an efficient solution. It suffices to map all 256 gray levels to their nearest centers once and for all, then use the table for conversion. Even a 16-bit range (65,536 entries) would be significantly accelerated.
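A sketch of that lookup-table idea for an 8-bit image; the `centers` values and variable names here are hypothetical (in the question's code they would come from `cv2.kmeans`):

```python
import numpy as np

# Hypothetical cluster centers, unsorted as k-means would return them.
centers = np.array([150.0, 12.0, 230.0, 80.0])

# Map each of the 256 possible gray levels to its nearest center, once.
levels = np.arange(256, dtype=np.float64)
nearest = np.argmin((levels[:, None] - centers[None, :]) ** 2, axis=1)

# rank[old_center_index] = that center's position in brightness order,
# replacing the repeated sortorder.index(...) calls.
rank = np.argsort(np.argsort(centers))

value_lut = centers[nearest]   # gray level -> quantized value
label_lut = rank[nearest]      # gray level -> label in brightness order

# Applying the tables is a single fancy-indexing pass, even at 4000x4000.
img = np.random.randint(0, 256, (4000, 4000), dtype=np.uint8)
qimg = value_lut[img]
labels2D = label_lut[img]
```

Building the tables costs O(256·K) work regardless of image size; converting the image is then one indexed read per pixel.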
