Program of the Week 02 - Unique Element

So initially I was planning to do a program on basic object / motion detection from video using OpenCV. I tried to find my old code for it but couldn't, so I figured that rather than wasting time searching for it, I'd just rewrite it myself. But since you are reading this long introduction and have probably read the title by now, that ain't happening; this week at least. So what alternative do I have today? Pretty sure there are a ton of programs out there for finding unique elements in a list or what not, and this one will probably join their ranks as well.

A little backstory for the problem / program at hand: last week one of my friends went for an interview for a new job. He'd already cleared several rounds and was now in the technical round. The problem given to him was simple:

"Given a list of elements, you need to find the non-repeating element from the list. The space complexity needs to be O(k) and the time complexity O(n)." Now, I've been out of the space and time complexity game for quite some time, but when it comes to finding unique element I know a bunch of libraries in python which can do that for me. So I came up with various solutions to a relatively simple problem. Here are some:

Using NumPy:
This solution got messy really quickly, but it does give us a start. As a couple of friends described it, "You are using a grenade to kill a mosquito."

# Initialization Part
import numpy as np

a = [1,2,1,2,3,4,4,5,6,6,5]

# Using numpy.unique() we'll find the distinct elements and their frequency distribution, aka counts
nums, counts = np.unique(a, return_counts=True)

# The smallest possible count is 1, and the problem statement guarantees there is exactly one
# number that doesn't repeat; otherwise this approach returns every number with a count of 1.
# np.where gives us the position(s) of all elements that do not repeat
pos = np.where(counts == 1)

# With the position(s) we can now print out the non-repeating number
print(nums[pos])
# [3]
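As a quick aside, the same thing fits in a single boolean-mask expression. It's also worth knowing that np.unique() sorts the array under the hood, so this route is really O(n log n) in time, not O(n). A minimal sketch of the mask variation:

import numpy as np

a = [1,2,1,2,3,4,4,5,6,6,5]

# Boolean masking instead of np.where: keep only the values whose count is 1
nums, counts = np.unique(a, return_counts=True)
print(nums[counts == 1])
# [3]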
    

Using Collections:
When I do ML work I sometimes use collections to analyze training results and get a basic overview of the data, though not very often, mainly because sklearn.metrics is more robust and convenient. Still, there are times when I want something smaller. This was the next approach I came up with, and it looked logical. I'm not sure it meets the space complexity constraint, and since most_common() sorts the counts it isn't strictly O(n) either, but at least there's no explicit 'for loop'.
# Initialization Part
import collections

a = [1,2,1,2,3,4,4,5,6,6,5]

y = collections.Counter(a)
# This gives a counter object like this
# Counter({1: 2, 2: 2, 4: 2, 5: 2, 6: 2, 3: 1})

# Doesn't this look like a dictionary? We could loop over it with a for loop and find the non-repeating element (more on that below), but collections has other things too.

y.most_common()
# So this gives us a very convenient list like this
# [(1, 2), (2, 2), (4, 2), (5, 2), (6, 2), (3, 1)]

# The last element here is the non-repeating one, which we can access like this

y.most_common()[-1][0]
# 3

# The whole thing can be written in one line like this:

collections.Counter(a).most_common()[-1][0]
# 3
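That one-liner is neat, but most_common() does sort the counts, so if we want to stay honest about the O(n) time requirement, the plain loop hinted at above does the job without sorting. A rough sketch, continuing with the same list a:

# Walk the counts once and print the number that appears exactly once
for num, count in collections.Counter(a).items():
    if count == 1:
        print(num)
# 3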

Using JavaScript:
That was the solution I gave my friend, and I guess he was impressed. Later he shared his own solution, written in JavaScript, and it was a good one. Here is the code snippet.
let a = [1,2,3,4,3,1,2];
let b = {};

// Toggle: add a key the first time a number appears, delete it the second time
for(let i = 0; i < a.length; i++){
    let num = a[i].toString();
    if(b.hasOwnProperty(num))
        delete b[num];
    else
        b[num] = a[i];
}
console.log(Object.keys(b)[0]);
// 4
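Just to bring it back to Python: the same toggle idea can be sketched with a set standing in for the plain object (my rough translation, not his actual code):

a = [1,2,3,4,3,1,2]
seen = set()

# Add a number the first time it shows up, remove it the second time;
# whatever survives the loop is the non-repeating element.
for num in a:
    if num in seen:
        seen.remove(num)
    else:
        seen.add(num)

print(seen.pop())
# 4

Like the JavaScript version, it runs in O(n) time and only ever holds the numbers that haven't found their pair yet, which is about as close to the O(k) space requirement as these toggle tricks get.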

If you have any suggestions or ideas for other programming problems, please do post them in the comments. That's all for week two of Program of the Week. Two down, fifty to go. :)
