Update: If you are interested in getting a running start to machine learning and deep learning, I have created a course that I’m offering to my dedicated readers for just \$9.99: Practical Deep Learning with Keras and Python.

So you’ve been working on machine learning and deep learning and have realized that it’s a slow process that requires a lot of compute power, power that is not very affordable. Fear not! There is a playground for running our experiments on Google’s GPU machines for free. In this little how-to, I will share a link that you can copy to your Google Drive and use to run your own experiments.

## Colaboratory

### Backup Using rsync

Here’s a mini how-to on backing up files to a remote machine using `rsync`. It shows progress while it does its thing and updates any changed files on the remote end, while keeping remote files that were deleted from your local folder.

```
rsync -v -r --update --progress -e ssh /media/nam/Documents/ nam@192.168.0.105:/media/nam/backup/documents/
```

Here, `/media/nam/Documents/` is the local folder and `/media/nam/backup/documents/` is the backup folder on the machine with IP `192.168.0.105`.

### Getting started with Hadoop 2.2.0 — Building

I wrote a tutorial on getting started with Hadoop back in the day (around mid-2010). Turns out the distro has moved on quite a bit in the latest versions, and that tutorial is unlikely to work anymore. I tried setting up Hadoop on a single-node “cluster” using Michael Noll’s excellent tutorial but that too was out of date. And of course, the official documentation on Hadoop’s site is lame.

Having struggled for two days, I finally got the steps smoothed out and this is an effort to document it for future use.

### AVL Tree in Python

I’ve been teaching “Applied Algorithms and Programming Techniques” and we just reached the topic of AVL trees. Having taught half of the AVL tree concept, I decided to code it in Python, my newest adventure. Bear in mind that I have never actually coded an AVL tree before and I’m not particularly comfortable with Python, so I thought it would be a good idea to experiment with both at the same time. So I fired up my Python IDE (that’s Aptana Studio, btw) and started coding.

For the newbie programmer, the code itself may not be very useful since you can find better code online. The benefit is in being able to look at the process: you can see the commits I made along the way on GitHub, how I structured the code when I began, and how I added bits and pieces. That way of approaching a problem should help in solving other problems as well. The final code (along with a rigorous unit test file) can be seen here: https://github.com/recluze/python-avl-tree
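To give a flavour of what goes into the code, here is a minimal sketch in Python of a height-tracking node and the left rotation at the heart of AVL rebalancing. The names (`Node`, `rotate_left`) are mine for illustration and don’t necessarily match what’s in the repository:

```python
class Node:
    """A binary tree node that tracks its own height for AVL balancing."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1  # a leaf has height 1

def height(node):
    return node.height if node else 0

def update_height(node):
    node.height = 1 + max(height(node.left), height(node.right))

def rotate_left(z):
    """Rotate z's right child up; used when z is right-heavy."""
    y = z.right
    z.right = y.left
    y.left = z
    update_height(z)  # z is now below y, so fix its height first
    update_height(y)
    return y          # y is the new subtree root

# demo: rotating the right-heavy chain 1 -> 2 -> 3
root = Node(1)
root.right = Node(2)
root.right.right = Node(3)
update_height(root.right)
update_height(root)
root = rotate_left(root)
print(root.key, root.left.key, root.right.key)  # 2 1 3
```

A full AVL tree also needs the mirror-image right rotation, the double rotations and the insert logic that decides when to rotate; the repository has the complete version.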

### Creating UML Sequence Diagrams with TikZ in LaTeX

Update: If you are interested in getting a concise intro to LaTeX along with my tips on best practices, check out my course (for just \$12.99) on Udemy here.

I’ve been working on my LaTeX skills for some time. The goal is to move towards an all-LaTeX solution for writing research papers, slide sets and other documents, and I’m almost there. An important goal was to be able to create all sorts of figures within LaTeX. (Well, originally the goal was to use open source software to create them, but it turns out that LaTeX is really good at this stuff.) The package I’m using for graphics creation is TikZ. Here we’ll cover how to create sequence diagrams using TikZ and a plugin package.

Here’s what we’re planning on creating.

Sequence Diagram using TikZ
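The original figure isn’t reproduced here, so as a hedged illustration, here is a minimal sequence diagram built with pgf-umlsd, a popular TikZ-based package for this job (the exact plugin package used in the post may differ):

```latex
\documentclass{article}
\usepackage{tikz}
\usepackage[underline=true]{pgf-umlsd}

\begin{document}
\begin{sequencediagram}
  \newthread{c}{Client}   % an active participant with a lifeline
  \newinst[2]{s}{Server}  % a passive instance, placed 2 units to the right
  \begin{call}{c}{request()}{s}{response}
  \end{call}
\end{sequencediagram}
\end{document}
```

Compiling this with pdflatex gives two lifelines with a call arrow and its dashed return.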

### LaTeX Screencasts

I’ve started putting together a couple of screencasts for those who want to start working with LaTeX. These are aimed at the extreme newbie who wants to learn the basics and get up to speed with the typesetting tool. I’ll be updating this post as I put more videos online inshallah. For now, see the videos below or on YouTube. For best results, view in HD at full screen.

Part I: Introduction

Part II: Creating your first document

Part III: Bibliographies, Class Files for Conference Styles
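As a taste of Part II, a complete LaTeX document really is just a few lines:

```latex
\documentclass{article}

\begin{document}
Hello, \LaTeX! This is my very first document.
\end{document}
```

Save this as hello.tex and run pdflatex hello.tex to get a PDF.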

### Site Re-design

New Site Design

The last time I did a custom re-design for my site was way back in my Blogspot days. That was in 2006; five years have passed but I still like that design. When I moved to wordpress.com, I didn’t have a way of creating my own design so I stuck with the best theme I could find. I moved to my own host here at CSRDU last year but didn’t really feel the need to create a custom design. The result: even with the great theming mechanism provided by WordPress, I never wrote a custom theme for my site. I always stuck with existing freely-available themes that left me wanting more in one department or another. Either the typography wasn’t up to par or I didn’t like the comments layout. So I always had to settle for whatever I could find.

That changed a couple of days ago when I came across a typography post on some blog that inspired me to begin my own WordPress theme. I had one clear goal in mind: improve readability. People come to my site mostly to read the tutorials, and it’s not fair to give the text secondary importance while focusing on the layout. So I started customizing the Sandbox WordPress theme. It has the cleanest markup, and I was able to make all the changes through custom CSS alone. I went with a fairly large serif font (Georgia) for the content and a sans-serif font (Open Sans, served from Google Web Fonts) for the post titles. I also have a slight text-shadow effect, though it won’t be visible if you’re using IE. There are only around five images in the whole theme, plus two fonts, so the overall result is a fairly lean page with clear fonts and layout.

As always, all comments and criticism are most welcome.

### Google Has Messed Up Social Once Again

I know I might be in the minority right now but that’s how I feel. It seems Google has learned little from Wave and Buzz. Here’s what I think has gone wrong this time.

First, Google engineers have probably never heard of the phrase “less is more”. They tried doing everything with Wave, and everyone knows how that turned out. They’re doing the same thing with Google+ (or Google Plus). It’s Twitter, FriendFeed, Skype, Facebook and Slashdot all rolled into one. The problem is, I don’t know which one I’m using when I navigate to the G+ interface.

I know I can divide people into circles and keep them separate, but I don’t know if I can keep track of it all. I have separate ‘circles’ for friends and professional connections. Most often, though, I want to share a thought with both of them, so I just post it as ‘public’. My friends, goofy as they are, start commenting and the post quickly turns into a dorm-room crap fest. That’s not the ‘professional’ image I want to project, which was the whole point of circles. The solution: post the same thing twice, once to public and again to friends. But then, why don’t I just go over to Twitter and post there?

That, I think, is the core of the problem. Why would anyone want to use Google+ once the initial awe of the cool interface for dropping your friends into circles subsides? For sharing news? I already have a neat little Twitter account for that. It’s streamlined and it does what it’s supposed to do; when I’m there, I know what I’m there for, and I don’t get distracted by comments from my goofy friends. Well, how about keeping tabs on my friends? I don’t use Facebook myself, but last time I checked, a lot of people were already using that social network. Just as people haven’t jumped the Yahoo! Mail ship despite the immense incompetence of Yahoo!, I don’t see why they’d move everything from Facebook over to Google+. Not everyone likes to play with new and shiny geek toys.

And that brings me to the second point: Google engineers just can’t shake the geek within them. They think everything will be adopted if it’s similar enough to Gmail. They tried this with Wave. They did it again with Buzz, integrating it too tightly with Gmail, and that was a fiasco. Now they’re doing it with Google+. It’s all about how cool the technology is. They’re going to release the API soon. That’s all great, but what about the social aspects? I don’t see any incentive to move away from my existing social networks, except maybe Buzz. So I don’t think Google+ is a Facebook killer or a Twitter killer. It might be a Buzz killer, but even that is a maybe.

### A Basic Naive Bayes classifier in Matlab

Update: If you are interested in getting a running start to machine learning and deep learning, I have created a course that I’m offering to my dedicated readers for just \$9.99. Access it here on Udemy. If you are only here for Matlab, continue reading =]

This is the second in my series of implementing low-level machine learning algorithms in Matlab. We first did linear regression with gradient descent, and now we’re working with the more popular Naive Bayes classifier. As is evident from the name, NB is a classifier, i.e. it sorts data points into classes based on some features. We’ll be writing NB in low-level Matlab (meaning we won’t use Matlab’s built-in implementation of NB). Here’s the example we’ve taken (with a bit of modification) from here.

Consider the following vector:

$(\text{likes shortbread}, \text{likes lager}, \text{eats porridge}, \text{watched England play football}, \text{nationality})^T$

A vector $x = (1, 0, 1, 0, 1)^T$ describes a person who likes shortbread, does not like lager, eats porridge, has not watched England play football and is a national of Scotland. The final element is the class we want to predict, and it takes two values: 1 for Scottish, 0 for English.

Here’s the data we’re given:

```matlab
X = [ 0 0 1 1 0 ;
      1 0 1 0 0 ;
      1 1 0 1 0 ;
      1 1 0 0 0 ;
      0 1 0 1 0 ;
      0 0 1 0 0 ;
      1 0 1 1 1 ;
      1 1 0 1 1 ;
      1 1 1 0 1 ;
      1 1 1 0 1 ;
      1 1 1 1 1 ;
      1 0 1 0 1 ;
      1 0 0 0 1 ];
```

Notice that data is usually written with features in columns and instances in rows. Here we need the opposite orientation: features in rows, instances in columns. That’s the convention we’ll use. We also need to separate the class from the feature set:

```matlab
Y = X(:,5);     % class labels
X = X(:,1:4)';  % X in proper format now: features in rows
```

Alright. Now that we have the data, let’s go over some theory. As always, this isn’t a tutorial on statistics; go read about the theory somewhere else. This is just a refresher:

In order to predict the class from a feature set, we need to find the probability of Y given X, where

$X = (x_1, x_2, \ldots, x_n)$

and n is the number of features. We denote the number of instances by m. In our example, n = 4 and m = 13. The probability of Y given X is:

$P(Y=1|X) = \frac{P(X|Y=1)\,P(Y=1)}{P(X)}$

This is the Bayes rule. Now we make the NB assumption: all features in the feature set are independent of each other! A strong assumption, but it usually works. Given this assumption, we need to find $P(X|Y=1)$, $P(Y)$ and $P(X)$.

(The braces notation that follows is the indicator notation: $1\{v\}$ evaluates to 1 if condition $v$ holds, and 0 otherwise.)

$P(X) = P(X|Y=1)\,P(Y=1) + P(X|Y=0)\,P(Y=0)$

$P(X|Y=1) = \prod_i{P(x_i|Y=1)}$

To find $P(X|Y=1)$, you just find $P(x_i|Y=1)$ for each feature and multiply them all together. This is exactly where the independence assumption comes in: without it, the joint probability would not factor into this product.

$P(x_i=1|Y=1) = \frac{\sum_j{1\{x_i^{(j)} = 1,\; y^{(j)} = 1\}}}{\sum_j{1\{y^{(j)} = 1\}}}$

This equation basically says: count the number of instances for which both $x_i$ and $y$ are 1, and divide by the number of instances for which $y$ is 1. That’s the probability of $x_i$ appearing with $Y=1$. Fairly straightforward if you think about it.

$P(Y=1) = \frac{\sum_j{1\{y^{(j)} = 1\}}}{m}$

Same idea as above: divide the number of instances with Y=1 by the total number of instances. Notice that we need to calculate all of these for both Y=0 and Y=1, because the first equation needs both. Let’s build it from the bottom up. In all the code below, think of E as 0 and S as 1, since we treat being Scottish as class 1 (the positive example).

$P(Y)$:

```matlab
pS = sum(Y)/size(Y,1);     % fraction of rows with Y = 1
pE = sum(1 - Y)/size(Y,1); % fraction of rows with Y = 0
```

$P(x_i|Y)$:

```matlab
phiS = X * Y / sum(Y);       % for each attribute i, fraction of Scots with attribute i set
phiE = X * (1-Y) / sum(1-Y); % for each attribute i, fraction of English with attribute i set
```

phiS and phiE are vectors that store these probabilities for all attributes. Now that we have the probabilities, we’re ready to make a prediction. Let’s get a test data point:

```matlab
x = [1 0 1 0]'; % test point
```

And calculate the probabilities $P(X|Y=1)$ and $P(X|Y=0)$:

```matlab
pxS = prod(phiS.^x .* (1-phiS).^(1-x)); % P(X|Y=1)
pxE = prod(phiE.^x .* (1-phiE).^(1-x)); % P(X|Y=0)
```

And finally, the posterior probabilities $P(Y=1|X)$ and $P(Y=0|X)$:

```matlab
pxSF = (pxS * pS) / (pxS * pS + pxE * pE) % P(Y=1|X)
pxEF = (pxE * pE) / (pxS * pS + pxE * pE) % P(Y=0|X)
```

They should add up to 1 since there are only two classes. Now you can define a threshold for deciding whether the class should be considered 1 or 0 based on these probabilities. In this case, we can consider this test point to belong to class 1 since pxSF > 0.5.

And there you have it!
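If you’d like to check the numbers outside Matlab, here is a rough NumPy translation of the whole exercise, with the same data and the same test point (NumPy is my choice here for illustration, not part of the original post):

```python
import numpy as np

# Columns: shortbread, lager, porridge, football, nationality (1 = Scottish)
data = np.array([
    [0,0,1,1,0],[1,0,1,0,0],[1,1,0,1,0],[1,1,0,0,0],[0,1,0,1,0],
    [0,0,1,0,0],[1,0,1,1,1],[1,1,0,1,1],[1,1,1,0,1],[1,1,1,0,1],
    [1,1,1,1,1],[1,0,1,0,1],[1,0,0,0,1],
])
Y = data[:, 4]       # class labels
X = data[:, :4].T    # features in rows, instances in columns

pS = Y.mean()        # P(Y=1)
pE = 1 - pS          # P(Y=0)
phiS = X @ Y / Y.sum()              # P(x_i=1 | Y=1) for each feature
phiE = X @ (1 - Y) / (1 - Y).sum()  # P(x_i=1 | Y=0) for each feature

x = np.array([1, 0, 1, 0])          # test point
pxS = np.prod(phiS**x * (1 - phiS)**(1 - x))  # P(X|Y=1)
pxE = np.prod(phiE**x * (1 - phiE)**(1 - x))  # P(X|Y=0)

pxSF = pxS * pS / (pxS * pS + pxE * pE)  # P(Y=1|X)
pxEF = pxE * pE / (pxS * pS + pxE * pE)  # P(Y=0|X)
print(pxSF, pxEF)
```

The two posteriors sum to 1, and pxSF comes out above 0.5, matching the Matlab result that this person is classified as Scottish.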