Deep Learning Experiments on Google’s GPU Machines for Free

Update: If you are interested in getting a running start to machine learning and deep learning, I have created a course that I’m offering to my dedicated readers for just $9.99: Practical Deep Learning with Keras and Python.

So you’ve been working on Machine Learning and Deep Learning, and you’ve realized that it’s a slow process that requires a lot of compute power. Power that is not very affordable. Fear not! There is a way to run our experiments on Google’s GPU machines for free. In this little how-to, I will share a link that you can copy to your Google Drive and use to run your own experiments.


Colaboratory

First, sign in to an account that has access to Google Drive (typically any Google/Gmail account). Then click on this link to my playground document and follow the instructions below to get your own private copy.

Deep Learning for Protein Function Prediction

Protein function prediction means taking information about a protein (such as its amino acid sequence or its 2D and 3D structure) and trying to predict which functions it will exhibit. This has implications in several areas of bioinformatics and affects how drugs are created and how diseases are studied. It is typically an intensive task requiring input from biologists and computer scientists alike, and annotating newly found proteins requires empirical as well as computational results.

We at FAST NU recently came up with a new method (dubbed DeepSeq, since it’s based on Deep Learning and works on protein sequences!) for predicting the functions of proteins using only their amino acid sequences. The sequence is the first piece of information we get when a new protein is found, so it is readily available. (Other pieces require a lot more effort.)

We have successfully applied DeepSeq to predict protein function from sequences alone, without requiring any input from domain experts. The paper hasn’t been peer reviewed yet, but we have made it available as a preprint, along with our full code on GitHub, so you can review it yourself.

We believe DeepSeq is going to be a breakthrough, inshaallah, in the field of bioinformatics and in how function prediction is done. Let’s see if I can post an update in a year, once the paper has been read by domain experts and we have a detailed peer review.

DeepSeq

Writing Better English — Avoid Very

I return with a minor post after another long break. This time, it’s about writing better English. Now, this isn’t humblebragging: I cannot be considered excellent at writing English, at least not by native standards. English is not my first language, and I haven’t had much formal English education. I have, however, read a lot. So even if my English is not great, I can still pass along some tips shared by experts.

Here’s the first one of those, shared by Amanda Patterson on Writers Write: a list of 45 words you can use to add emphasis without resorting to the word “very”. I found it refreshingly helpful.

Bear in mind, though, that you cannot just go ahead and use a word without looking up examples of its usage. Some words carry negative connotations even though their dictionary meanings look positive. For example, if you use the word ‘adequate’ to describe someone’s work, they might be offended, even though the dictionary meaning is ‘of acceptable quality’.

p.s. After writing this, I searched this post for the word “very” and found two instances where I had used it myself. I replaced them with better alternatives.

A Basic Naive Bayes classifier in Matlab

Update: If you are interested in getting a running start to machine learning and deep learning, I have created a course that I’m offering to my dedicated readers for just $9.99. Access it here on Udemy. If you are only here for Matlab, continue reading =]

This is the second post in my series on implementing low-level machine learning algorithms in Matlab. We first did linear regression with gradient descent, and now we’re working with the more popular Naive Bayes classifier. As is evident from the name, NB is a classifier: it sorts data points into classes based on their features. We’ll be writing NB in low-level Matlab (meaning we won’t use Matlab’s built-in implementation of NB). Here’s the example we’ve taken (with a bit of modification) from here.

Consider the following vector:

(likes shortbread, likes lager, eats porridge, watched England play football, nationality)^T

A vector x = (1, 0, 1, 0, 1)^T describes a person who likes shortbread, does not like lager, eats porridge, has not watched England play football and is a national of Scotland. The final entry is the class that we want to predict; it takes two values: 1 for Scottish, 0 for English.

Here’s the data we’re given:


X = [ 0 0 1 1 0 ;
      1 0 1 0 0 ;
      1 1 0 1 0 ;
      1 1 0 0 0 ;
      0 1 0 1 0 ;
      0 0 1 0 0 ;
      1 0 1 1 1 ;
      1 1 0 1 1 ;
      1 1 1 0 1 ;
      1 1 1 0 1 ;
      1 1 1 1 1 ;
      1 0 1 0 1 ;
      1 0 0 0 1 ];

Notice that data is usually represented with features in columns and instances in rows, as it is here. Our code, however, follows the opposite convention: features in rows, instances in columns. So we need to transpose the data into the proper orientation, and we also need to separate the class from the feature set:

Y = X(:,5);
X = X(:,1:4)'; % X in proper format now.

Alright. Now that we have the data, let’s go over some theory. As always, this isn’t a tutorial on statistics; go read about the theory somewhere else. This is just a refresher:

In order to predict the class from a feature set, we need to find the probability of Y given X, where

X = (x_1, x_2, ..., x_n)

with n being the number of features. We denote the number of instances given to us as m. In our example, n = 4 and m = 13. The probability of Y given X is:

P(Y=1|X) = P(X|Y=1) * P(Y=1) / P(X)

This is called the Bayes rule. Now, we make the NB assumption: all features in the feature set are independent of each other, given the class! A strong assumption, but it usually works. Given this assumption, we need to find P(X|Y=1), P(Y) and P(X).

(The curly-braces notation that follows is the indicator notation: 1{ v } evaluates to 1 if condition v holds, and 0 otherwise.)

P(X) = P(X|Y=1) * P(Y=1) + P(X|Y=0) * P(Y=0)

P(X|Y=1) = prod_i{ P(x_i|Y=1) }

To find P(X|Y=1), you just find P(x_i|Y=1) for each feature and multiply the results together. This is where the independence assumption comes in: without it, we couldn’t factor the joint probability into a simple product.

P(x_i = 1|Y=1) = sum_j{ 1{x_i^j = 1, y^j = 1} } / sum_j{ 1{y^j = 1} }

This equation basically says: count the number of instances for which both x_i and Y are 1, and divide by the number of instances for which Y is 1. That’s the probability of x_i appearing with Y. Fairly straightforward if you think about it.
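
To see the counting in action, here is that estimate computed literally for the first feature (a quick sketch; it assumes the X and Y variables defined above, and p_x1_S is just an illustrative name):

% P(x_1 = 1 | Y = 1): instances where feature 1 and Y are both 1,
% divided by instances where Y is 1. For this data it works out to
% 7/7 = 1: every Scot in the training set likes shortbread.
p_x1_S = sum(X(1,:)' == 1 & Y == 1) / sum(Y == 1);

The vectorized version we write below computes this for all features at once.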

P(Y=1) = sum_j{ 1{y^j = 1} } / m

Same as above: the fraction of instances with Y = 1 out of all m instances. Notice that we need to calculate all of these for both Y=0 and Y=1, because the first equation needs both. Let’s build this from the bottom up. In all of the code below, consider E as 0 and S as 1, since we treat being Scottish as class 1 (the positive example).

P(Y):

pS = sum(Y)/size(Y,1);      % fraction of rows with Y = 1 (7/13 here)
pE = sum(1 - Y)/size(Y,1);  % fraction of rows with Y = 0 (6/13 here)

P(x_i|Y):

phiS = X * Y / sum(Y);        % for each attribute i, the fraction of Scots
                              % with attribute x_i = 1
phiE = X * (1-Y) / sum(1-Y);  % for each attribute i, the fraction of English
                              % with attribute x_i = 1

phiS and phiE are vectors that store these probabilities for all attributes. Now that we have the probabilities, we’re ready to make a prediction. Let’s get a test data point:

x=[1 0 1 0]';  % test point

And calculate the likelihoods P(X|Y=1) and P(X|Y=0). Each factor is phi(i) when x_i = 1 and 1 - phi(i) when x_i = 0; the element-wise powers below select the right one for every attribute in a single expression:

pxS = prod(phiS.^x.*(1-phiS).^(1-x));  % P(x|Y=1)
pxE = prod(phiE.^x.*(1-phiE).^(1-x));  % P(x|Y=0)

And finally, the posteriors P(Y=1|X) and P(Y=0|X), computed with the Bayes rule. (Note that each likelihood must be weighted by its class prior, both in the numerator and in the expansion of P(X) in the denominator.)

pxSF = (pxS * pS) / (pxS * pS + pxE * pE)
pxEF = (pxE * pE) / (pxS * pS + pxE * pE)

They should add up to 1, since there are only two classes. Now you can define a threshold for deciding whether the class should be 1 or 0 based on these probabilities. In this case, we can assign the test point to class 1, since pxSF comes out to about 0.77, which is greater than 0.5.

And there you have it!
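
For reference, here is the whole example stitched together into one runnable script (a minimal sketch assembled from the snippets above, using the same variable names):

% Naive Bayes from scratch: priors, per-feature conditionals, posterior.
X = [ 0 0 1 1 0 ;
      1 0 1 0 0 ;
      1 1 0 1 0 ;
      1 1 0 0 0 ;
      0 1 0 1 0 ;
      0 0 1 0 0 ;
      1 0 1 1 1 ;
      1 1 0 1 1 ;
      1 1 1 0 1 ;
      1 1 1 0 1 ;
      1 1 1 1 1 ;
      1 0 1 0 1 ;
      1 0 0 0 1 ];

Y = X(:,5);      % class labels: 1 = Scottish, 0 = English
X = X(:,1:4)';   % features in rows, instances in columns

pS = sum(Y) / numel(Y);       % prior P(Y=1)
pE = 1 - pS;                  % prior P(Y=0)

phiS = X * Y / sum(Y);        % P(x_i = 1 | Y=1) for every feature
phiE = X * (1-Y) / sum(1-Y);  % P(x_i = 1 | Y=0) for every feature

x = [1 0 1 0]';               % test point

pxS = prod(phiS.^x .* (1-phiS).^(1-x));  % likelihood P(x | Y=1)
pxE = prod(phiE.^x .* (1-phiE).^(1-x));  % likelihood P(x | Y=0)

evidence = pxS*pS + pxE*pE;   % P(x)
pxSF = pxS*pS / evidence      % posterior P(Y=1 | x)
pxEF = pxE*pE / evidence      % posterior P(Y=0 | x)

Running it should print pxSF ≈ 0.77 and pxEF ≈ 0.23, classifying the test point as Scottish.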

A much better (and useful) Eclipse plugin

It’s been four days since I started working with Eclipse plugins, and I finally have my first useful plugin. It’s useful for my research purposes, and hopefully for a small audience interested in my work. It might also be useful for those trying to learn how to write Eclipse plugins, because I’ll soon be writing a tutorial on how I put this thing together from scratch (inshallah).

For the time being though, enjoy the screenshot.

Update: I’ve improved the plugin with a new ‘View’. It now operates much better, with a separate view for the output and controls. I’ve also added a ‘Hierarchy’ view for viewing the policy in a nice tree structure.

Flashing Android Dev Phone 1

This tutorial is about flashing your Android Dev Phone 1 with your own custom build. It provides a concise description of the steps involved, along with a special section on how to port Google’s apps to your custom build. I found that part particularly troublesome, with little help available on the Internet, so it will be a bonus 🙂

First, the disclaimer: this is for the Android Dev Phone 1 (ADP1). If you’re using T-Mobile’s SIM/firmware-locked phone, stop; this tutorial is not for you. If you’re using an ADP1, proceed at your own risk: you may brick your phone if you do something wrong, and I shall not be held responsible for it. Finally, you might want to back up your factory-provided image, though I don’t think it’s really necessary, because you can always flash it again using the HTC-provided images.

So, here is how it’s done:


Getting Started with Android Dev Phone 1

We received our Google Android Dev Phone 1 yesterday and immediately ran into trouble. We don’t have a supported carrier here, and we couldn’t get our own carriers to work with Android because we didn’t have the APN information. The Android distro that comes bundled with the Dev Phone won’t let you in without an APN, though: you get a “SIM not found” message, and you can’t do anything other than dial an emergency number. So, after searching for a while, I found some useful tips for getting around the problem.

First, plug your phone in with the provided USB cable. If you’re running XP, the device will probably not be recognized. (It wasn’t for me.) So, download the Android phone driver here (or here) and install it when XP asks to search for a driver. (Thanks to anddev for this information.) After that, get the Android SDK from here. Open a command prompt, navigate to the tools directory in the SDK, and execute these commands:

adb shell
su
cd /data/data/com.android.providers.settings/databases
sqlite3 settings.db
INSERT INTO system (name, value) VALUES ('device_provisioned', 1);
.exit
reboot

The INSERT statement marks the device as provisioned, so Android skips the SIM check on the next boot. Once the device finishes rebooting, launch the Settings app directly:

adb shell
am start -a android.intent.action.MAIN -n com.android.settings/.Settings

Many thanks to Android Tricks for writing this tip.

Update 1: The Android SDK now ships with the latest version of the Windows Android phone driver. You can find it in $ANDROID_SDK_HOME/usb_driver, so you don’t need to download the driver from the links provided above.

Update 2: To get the Android device to work on Ubuntu 9.04 Jaunty Jackalope, you need to perform the following steps:

  1. sudo nano /etc/udev/rules.d/51-android.rules
  2. Add this line to the file: SUBSYSTEM=="usb", ATTRS{idVendor}=="0bb4", MODE="0666"
    (0bb4 is the USB vendor ID of High Tech Computer Corporation, i.e. HTC; if you work with a different phone, get its vendor ID from lsusb)
  3. sudo chmod a+rx /etc/udev/rules.d/51-android.rules
  4. sudo /etc/init.d/udev restart
  5. adb devices (to see the device)

Video Lectures Uploaded

For those who are interested in the activities of SERG but can’t come over to the end of the world (Phase 7 in Hayatabad), here’s a resource you might benefit from. Video lectures of the workshops conducted by SERG members are being uploaded to various online services. These will invariably be free, and you can view them online if you have a reasonable Internet connection. See the list here.

A Brief Intro to Security in Java

Disclaimer: this is not a how-to for implementing security frameworks. It focuses on the research aspects of Java Security Managers. If you need to find out how to implement the code, follow some of the references.

The Java SE platform provides a solid security framework. Aside from the cryptography libraries and the Java Cryptography Extension (JCE) specification, it includes an important feature, called the Security Manager, which enables a program’s writer (or its user) to specify the security constraints of a program.

Every call to a system resource goes through the Java Virtual Machine (JVM). The VM includes hooks that call the Security Manager and request a decision regarding the resource call. Such calls include reading and writing files, opening sockets and listening on ports. The assigned security manager reads the security policy and decides whether the call should be allowed. If the call is to be granted, the security manager simply returns and the call proceeds. If it is not to be allowed, a security exception is thrown, which signifies the denial of the call.


VM-based Information Flow for Policy Enforcement

Regarding the paper titled “A Virtual Machine Based Information Flow Control System for Policy Enforcement” by Nair et al.

Nair et al. present an information flow control system which addresses the issue of implicit information flows using an extension of the Kaffe JVM. Trishul is implemented by extending Java Stack and Heap structures. The resulting framework is capable of dynamically assigning labels to objects and propagating these labels based on information and control flow. Label or “Taint” propagation is based on the Lattice based Information Flow Model by Denning.