TwoSum Algorithm (Python): Indices of duplicate elements treated as the same

I am trying to solve the twoSum LeetCode problem via a brute force algorithm. Essentially, the question is to find the indices of two numbers that sum to the target. So if I had the list [7,2,4,1] and my target was 9, I would return the indices of 7 & 2 (which in this example is [0,1]) as the solution.

I have written this algorithm here:

from typing import List

class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        if len(nums) < 2:
            return False

        count = 0
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    ans = str(nums.index(nums[j])) + str(nums.index(nums[i]))
                    return ans
                count = count + 1

It works for everything except the one scenario where two elements in the list are duplicates. So if the list contained [3,3] with target 6, my algorithm returns indices [0,0] instead of [0,1]. What is causing this issue, and what would be the proper way to fix it?


Solution:

ans = str(nums.index(nums[j])) + str(nums.index(nums[i]))

This happens because nums.index(...) always returns the index of the first occurrence of a value, so looking up the index of 3 twice on this line returns index 0 both times. There is no need for the lookup at all: i and j are already the indices. So:

ans = str(j) + str(i)
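Note also that the method is annotated to return List[int], so rather than concatenating strings, the two loop indices can simply be returned as a list. A minimal corrected sketch of the brute-force approach:

```python
from typing import List

class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        # Brute force: check every pair (i, j) with i < j.
        # i and j are already the indices, so no nums.index() lookup is needed,
        # and duplicate values no longer collide.
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    return [i, j]
        return []  # no pair sums to the target

# Duplicates are now handled correctly:
# Solution().twoSum([3, 3], 6) returns [0, 1]
```
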