At the end of September I discovered an amazing data set provided by Google: the Google n-gram data set. Even though the English Wikipedia article about n-grams needs some cleanup, it explains nicely what an n-gram is.
http://en.wikipedia.org/wiki/N-gram
The data set is available in several languages, and I am sure it is very useful for many tasks in web retrieval, data mining, information retrieval, and natural language processing.
The data set is very well described on the official Google n-gram page, which I also include as an iframe directly here on my blog.
So let me rather talk about some possible applications of this source of pure gold:
I forwarded this data set to two high school students whom I taught last summer at the DSA. They are now working on a project for a German student competition, using the n-grams and neo4j to predict sentences and help people improve their typing.
The idea is that once a user has started to type a sentence, statistics about the n-grams can be used to predict the next word in a semantically and syntactically sensible way, increasing typing speed by making suggestions to the user. This will be particularly useful on all these mobile devices, where typing is really annoying.
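To make the idea concrete, here is a minimal toy sketch (not the students' actual code) of how bigram counts can rank candidate next words; the class name, example words, and counts are made up for illustration:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy demonstration of the core idea: given the last typed word,
// rank candidate next words by their bigram counts.
public class BigramPredictor {

    // bigram counts: previous word -> (next word -> count)
    private final Map<String, Map<String, Integer>> counts = new HashMap<>();

    public void addBigram(String previous, String next, int count) {
        counts.computeIfAbsent(previous, k -> new HashMap<>())
              .merge(next, count, Integer::sum);
    }

    // Return up to topK candidate next words, most frequent first.
    public List<String> predict(String previous, int topK) {
        Map<String, Integer> followers = counts.getOrDefault(previous, Map.of());
        return followers.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue(Comparator.reverseOrder()))
                .limit(topK)
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        BigramPredictor p = new BigramPredictor();
        // counts are invented for the example; real ones come from the n-gram data set
        p.addBigram("new", "york", 5000);
        p.addBigram("new", "year", 3200);
        p.addBigram("new", "ideas", 150);
        System.out.println(p.predict("new", 2)); // [york, year]
    }
}
```

In the real system the counts would of course come from the Google n-gram data set rather than from hand-entered examples.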
You can find the source code of the newer version at: https://github.com/renepickhardt/typology/tree/develop
Note that this is just a primitive algorithm to process the n-grams and store the information in a neo4j graph database. Interestingly, it can already produce decent recommendations, and it uses less storage space than the raw n-gram data set, since the graph format is much more natural (and also because we did not store all of the n-gram data in neo4j; for example, n-grams from different years were aggregated).
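For readers curious how n-grams map onto a graph, here is a rough sketch in the style of the neo4j embedded Java API; the actual schema and code in the typology repository may well differ. Words become nodes, and a FOLLOWS relationship carries the count aggregated over all years:

```java
import java.util.HashMap;
import java.util.Map;

import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.graphdb.Transaction;

// Sketch: one node per word, FOLLOWS relationships with aggregated counts.
public class NgramGraphLoader {

    private enum RelTypes implements RelationshipType { FOLLOWS }

    private final GraphDatabaseService graphDb;
    private final Map<String, Node> wordCache = new HashMap<>();

    public NgramGraphLoader(GraphDatabaseService graphDb) {
        this.graphDb = graphDb;
    }

    // Store one bigram; counts from different years of the n-gram
    // data set are summed into a single relationship property.
    public void storeBigram(String w1, String w2, long count) {
        try (Transaction tx = graphDb.beginTx()) {
            Node a = getOrCreateWord(w1);
            Node b = getOrCreateWord(w2);
            Relationship rel = findFollows(a, b);
            if (rel == null) {
                rel = a.createRelationshipTo(b, RelTypes.FOLLOWS);
                rel.setProperty("count", count);
            } else {
                rel.setProperty("count", (Long) rel.getProperty("count") + count);
            }
            tx.success();
        }
    }

    private Node getOrCreateWord(String word) {
        return wordCache.computeIfAbsent(word, w -> {
            Node n = graphDb.createNode();
            n.setProperty("word", w);
            return n;
        });
    }

    private Relationship findFollows(Node a, Node b) {
        for (Relationship rel : a.getRelationships(RelTypes.FOLLOWS, Direction.OUTGOING)) {
            if (rel.getEndNode().equals(b)) {
                return rel;
            }
        }
        return null;
    }
}
```

One nice property of this model is visible right away: a shared word like "new" is stored exactly once as a node, no matter how many n-grams it appears in, which is part of why the graph takes less space than the flat n-gram files.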
From what I know, the roadmap is now very clear: normalize the weights, use a weighted sum over all the different kinds of n-grams for prediction, and apply machine learning (supervised learning) to learn those weights. As a training set, a corpus from different domains could be used (e.g. a Wikipedia corpus as a general-purpose corpus, or a corpus of a certain domain for a special purpose).
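To sketch what such a weighted sum might look like, here is a toy linear interpolation over normalized frequencies of different n-gram orders; the actual scoring function and the learned weights in the students' project may differ, and all numbers below are made up:

```java
import java.util.List;

// Score a candidate next word as a weighted sum of its normalized
// frequencies under n-gram models of different orders (linear interpolation).
public class InterpolatedScorer {

    private final double[] lambdas; // one weight per n-gram order, ideally summing to 1

    public InterpolatedScorer(double[] lambdas) {
        this.lambdas = lambdas;
    }

    // relativeFrequencies.get(i) is the candidate's normalized frequency
    // given the last i+1 words of context (unigram, bigram, trigram, ...)
    public double score(List<Double> relativeFrequencies) {
        double s = 0.0;
        for (int i = 0; i < relativeFrequencies.size() && i < lambdas.length; i++) {
            s += lambdas[i] * relativeFrequencies.get(i);
        }
        return s;
    }

    public static void main(String[] args) {
        // hypothetical weights; the project would learn them via supervised learning
        InterpolatedScorer scorer = new InterpolatedScorer(new double[]{0.1, 0.3, 0.6});
        // hypothetical normalized frequencies for one candidate word
        double s = scorer.score(List.of(0.01, 0.15, 0.4));
        System.out.println("interpolated score: " + s); // 0.1*0.01 + 0.3*0.15 + 0.6*0.4 = 0.286
    }
}
```

The weights could then be fit on held-out text from the chosen training corpus, for example by rewarding parameter settings that rank the word the user actually typed highly.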
If you have any suggestions about the students' work and their approach of using graph databases and neo4j to process and store n-grams as well as to predict sentences, feel free to join the discussion right here!