
This exercise is a simple extension of the word count demo: in the first part, you’ll count bigrams, and in the second part, you’ll compute bigram relative frequencies.

Part I: Count the bigrams

Take the word count example and extend it to count bigrams. Bigrams are simply sequences of two consecutive words. For example, the previous sentence contains the following bigrams: “Bigrams are”, “are simply”, “simply sequences”, “sequences of”, etc. Work with the sample collection Bible+Shakes.nouns on Blackboard. Don’t worry about doing anything fancy in terms of tokenization; it’s fine to continue using Java’s StringTokenizer.
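
As a rough guide, here is a minimal sketch of what the extension might look like, assuming the standard Hadoop org.apache.hadoop.mapreduce API; the class name BigramCount and the space-separated bigram key are illustrative choices, not requirements of the assignment:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BigramCount {

  // Emits ("prev cur", 1) for every pair of consecutive tokens on a line.
  public static class BigramMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text bigram = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      String prev = null;
      while (itr.hasMoreTokens()) {
        String cur = itr.nextToken();
        if (prev != null) {
          bigram.set(prev + " " + cur);
          context.write(bigram, ONE);
        }
        prev = cur;
      }
    }
  }

  // Sums the counts for each bigram; also usable as a combiner.
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "bigram count");
    job.setJarByClass(BigramCount.class);
    job.setMapperClass(BigramMapper.class);
    job.setCombinerClass(SumReducer.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The only substantive change from the word count demo is in the mapper, which remembers the previous token and emits the pair rather than the individual word.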

In order for this to run on a cluster, you will need to package your compiled classes into a jar file: jar cf jar-file input-file(s)

Questions to answer: 

1. How many unique bigrams are there? 

2. List the top ten most frequent bigrams and their counts. 

3. What fraction of all bigram occurrences do the top ten bigrams account for? That is, what is the cumulative frequency of the top ten bigrams?

4. How many bigrams appear only once? 

Part II: From bigram counts to relative frequencies

Extend your program to compute bigram relative frequencies, i.e., how likely you are to observe a word given the preceding word. The output of the code should be a table of values for F(Wn|Wn-1). Hint: to compute F(B|A), count up the number of occurrences of the bigram “A B”, and then divide by the number of occurrences of all the bigrams that start with “A”.
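
One way to realize the hint is a second MapReduce job keyed on the preceding word, so that a single reducer call sees every bigram starting with “A” and can compute the denominator locally. The sketch below assumes that approach and the same Hadoop API as above; the class names are illustrative, and it assumes the set of distinct words following any given word fits in the reducer’s memory (reasonable for this collection, but not a general-purpose solution):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class RelativeFrequency {

  // Emits (preceding word, following word) for each bigram in the line.
  public static class PairMapper extends Mapper<Object, Text, Text, Text> {
    private final Text prevWord = new Text();
    private final Text curWord = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      String prev = null;
      while (itr.hasMoreTokens()) {
        String cur = itr.nextToken();
        if (prev != null) {
          prevWord.set(prev);
          curWord.set(cur);
          context.write(prevWord, curWord);
        }
        prev = cur;
      }
    }
  }

  // For each preceding word A, tallies how often each B follows, then
  // divides by the total number of bigrams starting with A.
  public static class FrequencyReducer extends Reducer<Text, Text, Text, DoubleWritable> {
    private final Text pair = new Text();
    private final DoubleWritable frequency = new DoubleWritable();

    @Override
    public void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      // Assumes the distinct words following "key" fit in memory.
      Map<String, Integer> counts = new HashMap<>();
      long total = 0;
      for (Text val : values) {
        counts.merge(val.toString(), 1, Integer::sum);
        total++;
      }
      for (Map.Entry<String, Integer> e : counts.entrySet()) {
        pair.set(key.toString() + " " + e.getKey());
        frequency.set((double) e.getValue() / total);
        context.write(pair, frequency);
      }
    }
  }
}
```

The driver is the same as in Part I apart from the mapper/reducer classes and output types; because the map output value (Text) differs from the reduce output value (DoubleWritable), it also needs job.setMapOutputValueClass(Text.class). Alternatively, you could post-process the Part I counts directly.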

Questions to answer: 

1. What are the five most frequent words following the word “light”? What is the frequency of observing each word? 

2. Same question, except for the word “contain”. 

3. If there are a total of N words in your vocabulary, then there are a total of N^2 possible values for F(Wn|Wn-1): in theory, every word can follow every other word (including itself). What fraction of these values are non-zero? In other words, what proportion of all possible events is actually observed? To give a concrete example, let’s say that following the word “happy”, you only observe 100 different words in the text collection. Does this mean that N-100 words are never seen after “happy” (perhaps the distribution of happiness is quite limited)?
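
For question 3, note that the number of non-zero values is just the number of unique bigrams from Part I, so the fraction can be estimated with a short offline pass over the job output. The sketch below assumes the default TextOutputFormat layout (“word1 word2<TAB>count” per line) concatenated into a single local file; the class name BigramSparsity is illustrative, and estimating N from the words that appear in bigrams is an approximation:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

// Reads Part I output ("word1 word2<TAB>count" per line) and reports
// how many of the N*N possible bigrams were actually observed.
public class BigramSparsity {
  public static void main(String[] args) throws IOException {
    Set<String> vocabulary = new HashSet<>();
    long observedBigrams = 0;

    try (BufferedReader reader = Files.newBufferedReader(Paths.get(args[0]))) {
      String line;
      while ((line = reader.readLine()) != null) {
        String[] words = line.split("\t")[0].split(" ");
        if (words.length != 2) continue;   // skip malformed lines
        vocabulary.add(words[0]);
        vocabulary.add(words[1]);
        observedBigrams++;                 // each output line is one unique bigram
      }
    }

    long n = vocabulary.size();
    double fraction = (double) observedBigrams / ((double) n * n);
    System.out.printf("N = %d, observed bigrams = %d, fraction of N^2 = %.6f%n",
        n, observedBigrams, fraction);
  }
}
```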
