Different results using parallelism despite there being no data dependency/data race?

I have a piece of code in C which generates an int matrix and assigns 0 to every field. Afterwards, when I run this:

#pragma omp parallel for
    for (i = 0; i < 100; i++)
        for (j = 0; j < 100; j++)
            a[i][j] = a[i][j] + 1;

without OpenMP, I get, as expected, 1s in every field.

But when I run it in parallel, I get splotches of random values (0s and sometimes even 2s) every once in a while, despite what I think is a piece of code with no data dependency. Every time it's run, it produces a different result with different splotches of messy values. Am I missing something? I made sure that it's the same code by simply writing it in serial first, then copying it over and just adding the extra line making it parallel. Thanks in advance!


> Solution:

Your `i` and `j` variables aren't declared inside the parallel region.

According to http://supercomputingblog.com/openmp/tutorial-parallel-for-loops-with-openmp/ this causes the `j` variable to be shared across all threads, meaning it gets incremented too many times: some `j` values get skipped entirely (causing 0s).

I suspect that with the right interleaving the shared `j` also makes some cells get visited more than once (causing 2s or higher), but I'm not sure of the exact ordering off the top of my head.
