
K most frequent words from a file




The Most Efficient Way To Find Top K Frequent Words In A Big Word Sequence


Input: A positive integer K and a big text. The text can be viewed as a word sequence, so we don't have to worry about how to break it down into words.

Output: The most frequent K words in the text.

My thinking is like this.

1) Use a hash table to record every word's frequency while traversing the whole word sequence. In this phase, the key is the word and the value is the word's frequency. This takes O(n) time.

2) Sort the (word, word-frequency) pairs, with the word-frequency as the key. This takes O(n*lg(n)) time with a normal sorting algorithm.

3) After sorting, just take the first K words. This takes O(K) time.

To summarize, the total time is O(n + n*lg(n) + K). Since K is surely smaller than n, this is actually O(n*lg(n)).
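
As an illustration (mine, not part of the original question), a minimal Python sketch of this baseline, assuming the text is already split into a list of words:

from collections import Counter

def top_k_by_sorting(words, k):
    # 1) O(n) hash counting.
    counts = Counter(words)
    # 2) Sort all (word, frequency) pairs by frequency, O(m*lg(m)) where m
    #    is the number of distinct words, then 3) take the first k.
    ranked = sorted(counts.items(), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

print(top_k_by_sorting("a b a c a b".split(), 2))  # [('a', 3), ('b', 2)]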

We can improve this. Actually, we only want the top K words; the other words' frequencies are of no concern to us. So we can use "partial heap sorting". For steps 2) and 3), instead of a full sort we change them to:

2') Build a heap of (word, word-frequency) pairs with the word-frequency as the key. It takes O(n) time to build the heap.

3') Extract the top K words from the heap. Each extraction is O(lg(n)), so the total time is O(K*lg(n)).

To summarize, this solution costs O(n + K*lg(n)) time.
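
A hedged sketch of that heap-based plan (again my illustration, not the poster's code):

import heapq
from collections import Counter

def top_k_by_heap(words, k):
    counts = Counter(words)                        # step 1): O(n)
    # Steps 2') and 3'): heapq.nlargest heapifies the m distinct pairs and
    # then extracts the k largest, roughly O(m + k*lg(m)) overall.
    return heapq.nlargest(k, counts.items(), key=lambda pair: pair[1])

print(top_k_by_heap("a b a c a b".split(), 2))     # [('a', 3), ('b', 2)]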

This is just my thought. I haven't found a way to improve step 1).

I hope some information retrieval experts can shed more light on this question.

algorithm word-frequency

asked Oct 9 '08 by Morgan Cheng; edited Mar 15 '15 by Deduplicator

 
 
Would you use merge sort or quicksort for the O(n*lg(n)) sort? – committedandroider Feb 27 '15

For practical uses, Aaron Maenpaa's answer of counting on a sample is best. It's not like the most frequent words will hide from your sample. For you complexity geeks, it's O(1) since the size of the sample is fixed. You don't get the exact counts, but you're not asking for them either. – Nikana Reklawyks May 5 '15
 
If what you want is a review of your complexity analysis, then I'd better mention: if n is the number of words in your text and m is the number of different words (types, we call them), step 1 is O(n), but step 2 is O(m*lg(m)), and m << n (you may have billions of words and not reach a million types; try it out). So even with a dummy algorithm, it's still O(n + m*lg(m)) = O(n). – Nikana Reklawyks May 5 '15


16 Answers


This can be done in O(n) time

Solution 1:

Steps:

Count the words and hash them, which will end up in a structure like this:
var hash = {
    "I": 13,
    "like": 3,
    "meow": 3,
    "geek": 3,
    "burger": 2,
    "cat": 1,
    "foo": 100,
    ...
};

Traverse through the hash and find the most frequently used word (in this case "foo", 100), then create an array of that size.

Then we can traverse the hash again and use each word's number of occurrences as an array index; if there is nothing at that index yet, create a list there, otherwise append the word to it. We end up with an array like:
 0    1    2         3                    100
[[ ], [ ], [burger], [like, meow, geek], [] ... [foo]]


Then just traverse the array from the end, and collect the k words.
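
A minimal Python sketch of this bucket idea (my own illustration, not the answerer's code):

from collections import Counter

def top_k_by_buckets(words, k):
    counts = Counter(words)
    # One bucket per possible frequency, from 0 up to the maximum count.
    buckets = [[] for _ in range(max(counts.values()) + 1)]
    for word, freq in counts.items():
        buckets[freq].append(word)
    # Walk the buckets from the highest frequency down, collecting k words.
    result = []
    for freq in range(len(buckets) - 1, 0, -1):
        for word in buckets[freq]:
            result.append((word, freq))
            if len(result) == k:
                return result
    return result

print(top_k_by_buckets("a b a c a b".split(), 2))  # [('a', 3), ('b', 2)]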

Solution 2:

Steps:

Count the words into a hash, same as above.

Use a min heap and keep its size at k. For each word in the hash, compare its count with the heap's minimum: 1) if the count is greater than the minimum, remove the minimum (when the heap already holds k entries) and insert the new entry; 2) otherwise leave the heap as it is.

After traversing the hash, just convert the min heap to an array and return it.
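
A hedged sketch of Solution 2 (my illustration), reusing the counting step:

import heapq
from collections import Counter

def top_k_with_bounded_heap(words, k):
    counts = Counter(words)
    heap = []                                   # min heap of (frequency, word), size <= k
    for word, freq in counts.items():
        if len(heap) < k:
            heapq.heappush(heap, (freq, word))
        elif freq > heap[0][0]:                 # beats the current minimum, so it replaces it
            heapq.heapreplace(heap, (freq, word))
    # The heap now holds the k most frequent words (in no particular order).
    return [(word, freq) for freq, word in heap]

print(top_k_with_bounded_heap("a b a c a b".split(), 2))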

answered Mar 12 '14 by Chihung Yu; edited Aug 11 '14 by Peter O.

 
Your solution (1) is an O(n) bucket sort replacing a standard O(n lg n) comparison sort. Your approach requires additional space for the bucket structure, but comparison sorts can be done in place. Your solution (2) runs in time O(n lg k) -- that is, O(n) to iterate over all words and O(lg k) to add each one into the heap. – stackoverflowuser2010 Sep 29 '14

The first solution does require more space, but it is important to emphasize that it is in fact O(n) in time. 1: Hash frequencies keyed by word, O(n); 2: Traverse the frequency hash and create a second hash keyed by frequency. This is O(n) to traverse the hash and O(1) to add a word to the list of words at that frequency. 3: Traverse the hash down from the max frequency until you hit k. At most, O(n). Total = 3 * O(n) = O(n). – BringMyCakeBack Nov 5 '14

Typically when counting words, your number of buckets in solution 1 is widely overestimated (because the single most frequent word is so much more frequent than the second and third), so your array is sparse and inefficient. – Nikana Reklawyks May 5 '15


You're not going to get generally better runtime than the solution you've described. You have to do at least O(n) work to evaluate all the words, and then O(k) extra work to find the top k terms.

If your problem set is really big, you can use a distributed solution such as map/reduce. Have n map workers count frequencies on 1/nth of the text each, and for each word, send it to one of m reducer workers chosen by hashing the word. The reducers then sum the counts. A merge sort over the reducers' outputs will give you the most popular words in order of popularity.
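
A toy single-process sketch of that map/reduce flow (an illustration of the idea only; names such as num_reducers are mine, and a real job would run each piece on separate workers):

import heapq
from collections import Counter

def map_reduce_top_k(chunks, k, num_reducers=4):
    # "Map": each chunk of text is counted independently (in parallel in a real job).
    partials = [Counter(chunk.split()) for chunk in chunks]
    # "Shuffle": route each word to a reducer chosen by hashing the word, so
    # every count for the same word lands on the same reducer.
    reducers = [Counter() for _ in range(num_reducers)]
    for partial in partials:
        for word, cnt in partial.items():
            reducers[hash(word) % num_reducers][word] += cnt
    # Merge the reducers' outputs and keep the k most popular words.
    candidates = []
    for reducer in reducers:
        candidates.extend(reducer.items())
    return heapq.nlargest(k, candidates, key=lambda pair: pair[1])

print(map_reduce_top_k(["a b a", "c a b"], 2))     # [('a', 3), ('b', 2)]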

answered Oct 9 '08 by Nick Johnson

A small variation on your solution yields an O(n) algorithm if we don't care about ranking the top K, and an O(n + k*lg(k)) solution if we do. I believe both of these bounds are optimal to within a constant factor.

The optimization comes after we run through the list, inserting into the hash table. We can use the median of medians algorithm to select the Kth largest element (by frequency) in the list. This algorithm is provably O(n).

After selecting the Kth largest element, we partition the list around that element just as in quicksort. This is obviously also O(n). Anything on the "left" side of the pivot (the side with the larger counts) is in our group of K elements, so we're done (we can simply throw away everything else as we go along).

So this strategy is:
Go through each word and insert it into a hash table: O(n)
Select the Kth largest element: O(n)
Partition around that element: O(n)

If you want to rank the K elements, simply sort them with any efficient comparison sort in O(k * lg(k)) time, yielding a total run time of O(n+k * lg(k)).

The O(n) time bound is optimal within a constant factor because we must examine each word at least once. 

The O(n + k * lg(k)) time bound is also optimal because there is no comparison-based way to sort k elements in less than k * lg(k) time. 
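
A rough Python sketch of this selection-plus-partition strategy (mine, not Andrew's; it uses a randomized quickselect with the same expected O(n) behaviour rather than spelling out median of medians):

import random
from collections import Counter

def top_k_by_selection(words, k):
    counts = list(Counter(words).items())          # step 1: O(n) hash counting
    if k >= len(counts):
        return counts

    def select_top(items, k):
        # Partially order `items` so that the k pairs with the highest
        # frequency occupy items[:k]; expected linear time.
        lo, hi = 0, len(items) - 1
        while lo < hi:
            pivot = items[random.randint(lo, hi)][1]
            i, j = lo, hi
            while i <= j:
                while items[i][1] > pivot:
                    i += 1
                while items[j][1] < pivot:
                    j -= 1
                if i <= j:
                    items[i], items[j] = items[j], items[i]
                    i += 1
                    j -= 1
            if k - 1 <= j:
                hi = j
            elif k - 1 >= i:
                lo = i
            else:
                break
        return items[:k]

    top = select_top(counts, k)                            # select + partition
    return sorted(top, key=lambda p: p[1], reverse=True)   # optional O(k*lg(k)) ranking

print(top_k_by_selection("the cat sat on the mat the cat".split(), 2))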

answered Dec 26 '08 by Andrew
 
 
When we select the Kth smallest element, what gets selected is the Kth smallest hash key. It is not necessary that there are exactly K words in the left partition of step 3. – Prakash Murali May 20 '12

You will not be able to run "median of medians" on the hash table as it does swaps. You would have to copy the data from the hash table to a temp array, so O(n) storage will be required. – user674669 Feb 20 '13

I don't understand how you can select the Kth smallest element in O(n)? – Michael Ho Chum Mar 16 '15

Check this out for an algorithm for finding the Kth smallest element in O(n): wikiwand.com/en/Median_of_medians – Piyush Feb 2
If your "big word list" is big enough, you can simply sample and get estimates. Otherwise, I like hash aggregation.

Edit:

By sample I mean choose some subset of pages and calculate the most frequent word in those pages. Provided you select the pages in a reasonable way and select a statistically significant sample, your estimates of the most frequent words should be reasonable.

This approach is really only reasonable if you have so much data that processing it all is just kind of silly. If you only have a few megs, you should be able to tear through the data and calculate an exact answer without breaking a sweat rather than bothering
to calculate an estimate.
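
A tiny illustration of the sampling idea (my own sketch; the sample_size choice is arbitrary):

import random
from collections import Counter

def estimate_top_k(words, k, sample_size=1_000_000):
    # Count on a fixed-size random sample instead of the full text; with a
    # big enough sample, the most frequent words match the full-text answer.
    sample = random.sample(words, min(sample_size, len(words)))
    return Counter(sample).most_common(k)

print(estimate_top_k("a b a c a b".split() * 1000, 2))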

answered Oct 9 '08 by Aaron Maenpaa; edited Jul 7 '09

 
 
Sometimes you have to do this many times over, for example if you're trying to get the list of frequent words per website, or per subject. In that case, "without breaking a sweat" doesn't really cut it. You still need to find a way to do it as efficiently as possible. – itsadok Sep 16 '09

+1 for a practical answer that doesn't address the irrelevant complexity issues. @itsadok: For each run: if it's big enough, sample it; if it's not, then gaining a log factor is irrelevant. – Nikana Reklawyks May 5 '15
You can cut down the time further by partitioning using the first letter of the words, then partitioning the largest multi-word set using the next character until you have k single-word sets. You would use a sort of 256-way tree with lists of partial/complete words at the leaves. You would need to be very careful not to cause string copies everywhere.

This algorithm is O(m), where m is the number of characters. It avoids that dependence on k, which is very nice for large k [by the way your posted running time is wrong, it should be O(n*lg(k)), and I'm not sure what that is in terms of m].

If you run both algorithms side by side you will get what I'm pretty sure is an asymptotically optimal O(min(m, n*lg(k))) algorithm, but mine should be faster on average because it doesn't involve hashing or sorting.

answered Oct 9 '08 by Strilanc
 
What you're describing is called a 'trie'. – Nick Johnson Oct 9 '08

Hi Strilanc. Can you explain the partitioning process in detail? – Morgan Cheng Oct 9 '08

How does this not involve sorting? Once you have the trie, how do you pluck out the k words with the largest frequencies? It doesn't make any sense. – ordinary Nov 12 '13
You have a bug in your description: counting takes O(n) time, but sorting takes O(m*lg(m)), where m is the number of unique words. This is usually much smaller than the total number of words, so you should probably just optimize how the hash is built.

answered Dec 26 '08 by martinus

Your problem is the same as this one: http://www.geeksforgeeks.org/find-the-k-most-frequent-words-from-a-file/

Use a Trie and a min heap to solve it efficiently.

answered Jan 25 '15 by Jitendra Rathor

If what you're after is the list of the k most frequent words in your text for any practical k and for any natural language, then the complexity of your algorithm is not relevant.

Just sample, say, a few million words from your text, process that with any algorithm in a matter of seconds, and the most frequent counts will be very accurate.

As a side note, the complexity of the dummy algorithm (1. count all, 2. sort the counts, 3. take the best) is O(n + m*log(m)), where m is the number of different words in your text. log(m) is much smaller than n/m, so it remains O(n).

Practically, the long step is the counting.




Find the k most frequent words from a file

Given a book of words, and assuming you have enough main memory to accommodate all the words, design a data structure to find the top K maximum occurring words. The data structure should be dynamic so that new words can be added.

A simple solution is to use hashing. Hash all words one by one into a hash table. If a word is already present, increment its count. Finally, traverse the hash table and return the k words with the maximum counts.

We can use a Trie and a Min Heap to get the k most frequent words efficiently. The idea is to use the Trie to search for existing words and add new words efficiently; the Trie also stores the count of occurrences of each word. A Min Heap of size k is used to keep track of the k most frequent words at any point in time (the Min Heap is used the same way as when finding the k largest elements).

The Trie and Min Heap are linked with each other by storing an additional field 'indexMinHeap' in the Trie and a pointer 'trNode' in the Min Heap. The value of 'indexMinHeap' is maintained as -1 for words that are currently not in the Min Heap (i.e., currently not among the top k frequent words). For words present in the Min Heap, 'indexMinHeap' contains the index of the word in the Min Heap. The pointer 'trNode' in the Min Heap points to the leaf node corresponding to the word in the Trie.

Following is the complete process to print the k most frequent words from a file (a simplified sketch in Python follows the steps below).

Read all words one by one. For every word, insert it into the Trie, increasing the word's counter if it already exists. Now we need to insert this word into the min heap as well. For insertion into the min heap, three cases arise:

1. The word is already present in the min heap. We just increase the corresponding frequency value in the min heap and call minHeapify() for the index obtained from the 'indexMinHeap' field in the Trie. When min heap nodes are swapped, we change the corresponding minHeapIndex in the Trie. Remember, each node of the min heap also holds a pointer to a Trie leaf node.

2. The min heap is not full. We insert the new word into the min heap, update the min heap index in the Trie leaf node, and call buildMinHeap().

3. The min heap is full. Two sub-cases arise:

3.1 The frequency of the new word is less than the frequency of the word stored at the head of the min heap. Do nothing.

3.2 The frequency of the new word is greater than the frequency of the word stored at the head of the min heap. Replace and update the fields. Make sure to set the min heap index of the replaced word in the Trie to -1, as that word is no longer in the min heap.

4. Finally, the Min Heap will hold the k most frequent words of all words present in the given file, so we just need to print all words present in the Min Heap.
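
Here is a much-simplified Python sketch of this scheme (an illustration only, not the article's C code): it keeps the Trie as nested dicts and a bounded min heap of size k, but replaces the indexMinHeap/trNode cross-links with a plain word-to-entry dict.

import heapq

def top_k_stream(words, k):
    trie = {}          # nested dicts; the special key '#' holds a word's count at its leaf
    heap = []          # min heap of [count, word] lists, at most k entries
    in_heap = {}       # word -> its heap entry (stand-in for the indexMinHeap/trNode links)

    for word in words:
        # Insert the word into the trie and bump its count.
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node['#'] = count = node.get('#', 0) + 1

        # Update the bounded min heap.
        if word in in_heap:                     # case 1: already tracked, refresh and re-heapify
            in_heap[word][0] = count
            heapq.heapify(heap)
        elif len(heap) < k:                     # case 2: heap not full yet
            entry = [count, word]
            in_heap[word] = entry
            heapq.heappush(heap, entry)
        elif count > heap[0][0]:                # case 3.2: beats the current minimum
            evicted = heapq.heappop(heap)
            del in_heap[evicted[1]]
            entry = [count, word]
            in_heap[word] = entry
            heapq.heappush(heap, entry)
        # case 3.1 (count <= current minimum): do nothing

    return sorted(((w, c) for c, w in heap), key=lambda p: p[1], reverse=True)

print(top_k_stream("your well and to Geeks and to Geeks your well Geeks".split(), 3))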


Output:
your : 3
well : 3
and : 4
to : 4
Geeks : 6


The above output is for a sample input file.