Tuesday, 10 May 2016

Towards Urdu Corpus: Mining Wikipedia Urdu using Wikiforia Parser

 

Corpus collection is the first step before you even think of machine learning and linguistics. While there have been serious, concerted efforts in many languages to compile and publish language corpora, Urdu is nowhere to be seen in this context. Why? That warrants a separate detailed post, which I will inshaAllah write sometime in the near future.

In this post, I want to share my first-hand experience of collecting a sizable raw Urdu plain-text corpus from Wikipedia, so that not only can I use it in my research, but I can also publish it under an open-source license for others to benefit from.

Wikipedia publishes complete database backup dumps of

“all Wikimedia wikis, in the form of wikitext source and metadata embedded in XML. A number of raw database tables in SQL form are also available”. “These snapshots are provided at the very least monthly and usually twice a month.”

As I was interested in Wikipedia Urdu, I downloaded the following two files from the Wikipedia Urdu database backup dumps page:

The next task was to extract the actual page content from these dump files, which follow a specific XML schema. Wikipedia maintains a comprehensive list of open-source parsers written in different programming languages and published under different types of open-source licenses. Because my goal was to collect open-source Urdu plain “text” and my programming language of choice was Java, I opted for Wikiforia.

“Wikiforia is a library and a tool for parsing Wikipedia XML dumps and converting them into plain text for other tools to use.”

So, I began by cloning the Wikiforia GitHub repository locally on my laptop and then ran the following command in the terminal:

java -jar wikiforia-1.2.1.jar \
     -pages urwiki-20160501-pages-articles-multistream.xml.bz2 \
     -output output.xml

The program worked perfectly and uses concurrency (one thread per logical core) to speed up the processing. It took a few minutes to complete the task; however, the output was not pure plain text but a simplified form of XML, which looked like:

[Image: simplified XML output]

At that time I had two options:

  1. Run another tool to convert this output file from XML to Plain Text, or
  2. Add custom implementation to Wikiforia to make it output pure Plain Text

I opted for the second option for two reasons: first, I didn’t want to waste additional time and processor cycles processing the generated output once again; second, I thought there might be others who would benefit from the modification.

So I forked Wikiforia and added a new sink implementation, PlainTextWikipediaPageWriter.java. I also had to modify the main program, App.java, to add CLI support for an additional switch, “outputformat”, with a sensible default of “xml” and (for now) only two possible values: “xml” and “plain-text”. Once I had done that, I also submitted a pull request on the Wikiforia GitHub repository, in case the maintainers decide to merge the patch into the original repository.
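To give a feel for what such a sink does, here is a minimal, stand-alone sketch. The class and method names here are mine, not the actual Wikiforia API (the real PlainTextWikipediaPageWriter implements Wikiforia’s own sink interface); this only illustrates the idea of writing each parsed page as raw text instead of wrapping it in XML elements:

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;

/**
 * Hypothetical plain-text sink: writes each page's title and body
 * directly, with a blank line between pages, and no XML markup.
 */
public class PlainTextSink {
    private final Writer out;

    public PlainTextSink(Writer out) {
        this.out = out;
    }

    /** Write one parsed page as raw text. */
    public void write(String title, String text) throws IOException {
        out.write(title);
        out.write(System.lineSeparator());
        out.write(text);
        out.write(System.lineSeparator());
        out.write(System.lineSeparator()); // blank line between pages
    }

    public static void main(String[] args) throws IOException {
        StringWriter sw = new StringWriter();
        PlainTextSink sink = new PlainTextSink(sw);
        sink.write("Example title", "Example article body.");
        System.out.print(sw);
    }
}
```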

Then, I ran the following modified command to extract the plain text from the Wikipedia Urdu database dumps:

java -jar wikiforia-1.2.1.jar \
     -pages urwiki-20160501-pages-articles-multistream.xml.bz2 \
     -output output.txt \
     -outputformat plain-text

And here’s what the plain-text output.txt looks like:

[Image: plain-text output]

Finally, alhamdolillah! I made my first contribution to the Urdu Corpus Community Project.

Tuesday, 12 April 2016

Java’s utilization of Multiple CPU Cores for Parallelism or Concurrency

 

While verifying Java’s utilization of multiple CPU cores for parallel, concurrent, multi-threaded programming, I came across some interesting numbers. I wrote a simple program that computes 40,000,000 random integers, first using a single thread and then again using the maximum number of threads, one per available CPU core.
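A minimal sketch of that measurement is below. The class and method names are mine, not the actual code from the repository, and the workload is deliberately simple; each task creates its own java.util.Random so the threads don’t contend on one shared generator:

```java
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RandomIntBenchmark {
    static final int TOTAL = 40_000_000;

    /** Generate `count` random integers and discard them. */
    static void generate(int count) {
        Random r = new Random(); // one generator per task, no contention
        for (int i = 0; i < count; i++) {
            r.nextInt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Single-threaded run
        long start = System.nanoTime();
        generate(TOTAL);
        long single = System.nanoTime() - start;

        // One thread per available logical core, splitting the work
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        start = System.nanoTime();
        for (int t = 0; t < cores; t++) {
            pool.submit(() -> generate(TOTAL / cores));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        long multi = System.nanoTime() - start;

        System.out.printf("single thread: %d ms, %d threads: %d ms%n",
                single / 1_000_000, cores, multi / 1_000_000);
    }
}
```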

To find the number of available CPU cores on a system, Java exposes a method in java.lang.Runtime:

public int availableProcessors()

Returns the number of processors available to the Java virtual machine.

This value may change during a particular invocation of the virtual machine. Applications that are sensitive to the number of available processors should therefore occasionally poll this property and adjust their resource usage appropriately.

Returns:
the maximum number of processors available to the virtual machine; never smaller than one
Since:
1.4
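Calling it is a one-liner; something like the following is enough to reproduce the observation below:

```java
public class CoreCount {
    public static void main(String[] args) {
        // Reports logical processors, so with Hyper-Threading a
        // dual-core machine prints 4, not 2.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Available logical processors: " + cores);
    }
}
```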

When I ran the program to print the number of available CPU cores, I was surprised that it printed “4” instead of “2”, because I have a dual-core laptop:

[Image: dual-core CPU specification]

To verify this further, I opened up the Task Manager and found this:

[Image: Task Manager showing 4 logical processors]

It turns out that it’s “4” because of Hyper-Threading:

“For each processor core that is physically present, the operating system addresses two virtual or logical cores, and shares the workload between them when possible.”

So, finally, I ran my single-threaded program and observed this:

[Image: all four logical processors busy while running a single thread]

Why were all four logical processors busy running a single-threaded program? Shouldn’t it have been just one of them?

To dig deeper, I changed my program to run on 4 parallel threads, and the result was:

[Image: multi-threaded execution on 4 logical processors]

That didn’t make any sense: clearly, both the single-threaded and multi-threaded versions of the program were using all the available logical processors. Searching the internet for clarification revealed that:

“The OS is responsible for scheduling. It is free to stop a thread and start it again on another CPU. It will do this even if there is nothing else the machine is doing.

The process is moved around the CPUs because the OS doesn't assume there is any reason to continue running the thread on the same CPU each time.”

And there comes the concept of CPU or Processor Affinity:

The processor affinity is simply a number that every process is associated with. It serves as a bit array that determines on which CPUs in a system the threads of a particular process are allowed to run. For instance a processor affinity of 2 means that the process can only run on CPU 1, because only the bit at index 1 is set (if the processor affinity is regarded as a bit array with indexing starting at the rightmost bit with zero). A processor affinity of 1 means, that the process, or better yet, the threads of that process, can only run on CPU 0. A processor affinity of 3 means that the process may run on both CPUs 0 and 1. A processor affinity of 0 means that there is no CPU that this process may run on, and is therefore not possible. The processor affinity is normally inherited from the parent process that starts a particular process, but it can also be changed at runtime from another process.
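The bit-array reading of the mask is easy to illustrate. Java itself has no portable API for setting processor affinity, which is why an external tool is needed; this snippet (names are mine, for illustration only) merely decodes a mask:

```java
public class AffinityMask {
    /** True if the CPU at index `cpu` is allowed by the affinity bit mask. */
    static boolean allowed(long mask, int cpu) {
        return ((mask >> cpu) & 1L) == 1L;
    }

    public static void main(String[] args) {
        long mask = 3; // binary 11: CPUs 0 and 1 allowed
        for (int cpu = 0; cpu < 4; cpu++) {
            System.out.println("CPU " + cpu + ": " + allowed(mask, cpu));
        }
    }
}
```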

While there are several ways to set the processor affinity, the one I found quick and easy to use was ProcAff. After running the same single-threaded version of the program through procaffx64.exe:

[Image: procaffx64 command]

I observed this:

[Image: single thread pinned to one logical processor]

That’s how the execution of a single-threaded program should look: it utilizes only one logical processor.

Furthermore, it is quite interesting that the following execution times match (please refer to the Microsoft Excel sheet “analysis.xlsx” uploaded to the GitHub repository along with the code):

Average Time to run a single Thread with no CPU/Process Affinity == Average Time to run a single Thread with CPU/Process Affinity

However, the Task Manager shows visually that the former case uses all 4 logical processors while the latter uses only one, yet both end up finishing their task at almost exactly the same time.

Sunday, 10 April 2016

Illustration of Boolean Retrieval Tool in Java using OpenNLP, ANTLR v4 and XStream

 

An example tool illustrating the concepts of Boolean Retrieval (http://www.cis.lmu.de/~hs/teach/14s/ir/) as taught to us by Dr Tafseer Ahmed Khan (http://cs.dsu.edu.pk/faculty/tafseer/).

I've used:

Apache OpenNLP (https://opennlp.apache.org/) - For Tokenization of documents and dictionary creation

ANTLR 4 - (http://www.antlr.org/) - For the Boolean Retrieval query grammar, lexer and parser

XStream - (http://x-stream.github.io/) - For reading input config from XML configuration file
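The core idea of Boolean Retrieval, as covered in the course linked above, is answering queries like “term1 AND term2” by intersecting the sorted posting lists of document IDs for each term. A minimal sketch of that intersection (illustrative only, not the tool’s actual code) might look like:

```java
import java.util.ArrayList;
import java.util.List;

public class BooleanAnd {
    /** Intersect two sorted posting lists of document IDs (term1 AND term2). */
    static List<Integer> intersect(List<Integer> p1, List<Integer> p2) {
        List<Integer> result = new ArrayList<>();
        int i = 0, j = 0;
        // Walk both lists in lock-step, advancing the smaller head
        while (i < p1.size() && j < p2.size()) {
            int a = p1.get(i), b = p2.get(j);
            if (a == b) {
                result.add(a);
                i++;
                j++;
            } else if (a < b) {
                i++;
            } else {
                j++;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> term1Docs = List.of(1, 3, 5, 8);
        List<Integer> term2Docs = List.of(2, 3, 8, 9);
        System.out.println(intersect(term1Docs, term2Docs)); // prints [3, 8]
    }
}
```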