I have found a lot of threads on removing duplicates from arrays, but none for my specific use case. I have a 2D list and I would like to remove duplicate occurrences within each inner list.
I have this:
[['<>', 'a', 'b', 'b', 'b', 'b', 'a', 'a', '*'], [], [], []]
which I would like to transform into this:
[['<>', 'a', 'b', '*'], [], [], []]
I'd appreciate any help.
>Solution :
Just do this:
arr = [['<>', 'a', 'b', 'b', 'b', 'b', 'a', 'a', '*'], [], [], []]
unique = list(map(set, arr))
unique = list(map(list, unique))
print(unique)
Output –
[['b', '*', 'a', '<>'], [], [], []]
The first line converts each list inside arr into a Python set. Sets never allow duplicates, so they are an easy way to get the unique values from any kind of sequence. The second line then converts those sets back into lists. Note that sets are unordered, which is why the elements in the output appear in a different order than in the original lists.
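If you need to preserve the original order of first occurrence (matching the expected output in the question), sets won't guarantee that. One common alternative is dict.fromkeys, which keeps only the first occurrence of each key and preserves insertion order (guaranteed since Python 3.7):

```python
arr = [['<>', 'a', 'b', 'b', 'b', 'b', 'a', 'a', '*'], [], [], []]

# dict.fromkeys drops duplicate keys while keeping the first
# occurrence of each, in insertion order.
unique = [list(dict.fromkeys(row)) for row in arr]
print(unique)  # [['<>', 'a', 'b', '*'], [], [], []]
```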