Practical Deep Learning with Keras and Python (New Video Course)

I’ve just finished creating a new video course on Udemy about Practical Deep Learning with Keras and Python. It’s aimed at two types of people:

  1. Those who are just coming to machine learning and deep learning and want a soft, code-based introduction, as opposed to the mathematical treatment typically given to the subject.
  2. Those who have studied ML/DL before but have trouble applying the concepts in code.

For the dedicated readers of my blog, I’m making it available at the minimum price of just $9.99. Please use the following coupon link to access it at this price.

https://www.udemy.com/practical-deep-learning-with-keras/?couponCode=RECLYBLOG

If you want to receive updates about content uploads, coupons and promotions, please subscribe to my mailing list here on MailChimp.

 

Deep Learning Experiments on Google’s GPU Machines for Free

Update: If you are interested in getting a running start on machine learning and deep learning, I have created a course that I’m offering to my dedicated readers for just $9.99: Practical Deep Learning with Keras and Python.

So you’ve been working on machine learning and deep learning and have realized that it’s a slow process that requires a lot of compute power. Power that is not very affordable. Fear not! We have a way of running our experiments on Google’s GPU machines for free, using a playground notebook. In this little how-to, I will share a link with you that you can copy to your Google Drive and use to run your own experiments.

BTW, if you would like to receive updates when I post similar content, please sign up below:

Sign up for promotions, course coupons and new-content notifications through a short form here.

Colaboratory

First, sign in to an account that has access to Google Drive (typically any Google/Gmail account). Then, click on this link to open my playground document and follow the instructions below to get your own private copy.
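Once your copy is open, it is worth confirming that the GPU runtime is actually attached before kicking off a long experiment (Runtime > Change runtime type > GPU). A minimal check, assuming the runtime ships the standard `nvidia-smi` tool as Colab GPU instances do:

```python
import shutil
import subprocess

def gpu_available():
    """Return True if an NVIDIA GPU is visible to this runtime.

    Colab GPU instances ship nvidia-smi; if it is on PATH and exits
    cleanly, a GPU is attached to the session.
    """
    exe = shutil.which("nvidia-smi")
    if exe is None:
        return False
    result = subprocess.run([exe], stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    return result.returncode == 0

print("GPU attached:", gpu_available())
```

If this prints False, re-check the runtime type before running anything heavy; the code in the playground will still run, just on CPU and far slower.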

Backup Using rsync

Here’s a mini how-to on backing up files to a remote machine using rsync. The command shows its progress while it does its thing and updates any changed files on the remote end, while keeping files on the remote side that were deleted from your local folder.

rsync -v -r --update --progress -e ssh /media/nam/Documents/ nam@192.168.0.105:/media/nam/backup/documents/

Here, /media/nam/Documents/ is the local folder and /media/nam/backup/documents/ is the backup folder on the machine with IP 192.168.0.105. As for the flags: -v prints what is being transferred, -r recurses into directories, --update skips files that are newer on the destination, --progress shows per-file transfer progress, and -e ssh runs the transfer over SSH.

How to Access Google Adsense Reports

So, AdMob was acquired a while ago by Google, and it was recently announced that AdMob’s publisher reports would no longer be available through the old APIs. Instead, they now have to be retrieved through the AdSense API, which is based on OAuth 2.0 and is thus a real pain for those just getting started.

It turns out that the process is quite straightforward but extremely poorly documented. You can go through the AdSense reporting docs, the Google API library and the OAuth 2.0 specs, but you will soon be lost. After spending a couple of days decoding the requirements, I worked out the bare-metal approach to accessing the stats. Here is how.
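To give a flavour of the bare-metal approach, the very first step is building the OAuth 2.0 authorization URL by hand instead of going through the client library. A sketch using only the standard library; the client ID is a placeholder you would take from your Google API console project, and the "oob" redirect URI is the out-of-band value Google supported for installed apps in this era:

```python
import urllib.parse

# Placeholder: substitute the client ID from your Google API console project.
CLIENT_ID = "YOUR_CLIENT_ID.apps.googleusercontent.com"
# "Out of band": Google displays the authorization code in the browser
# instead of redirecting to a web server you control.
REDIRECT_URI = "urn:ietf:wg:oauth:2.0:oob"

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/auth"
SCOPE = "https://www.googleapis.com/auth/adsense.readonly"

def authorization_url():
    """Build the URL the user must visit to grant read-only AdSense access."""
    params = {
        "response_type": "code",    # we want an authorization code back...
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": SCOPE,
        "access_type": "offline",   # ...plus a refresh token for later runs
    }
    return AUTH_ENDPOINT + "?" + urllib.parse.urlencode(params)

print(authorization_url())
```

Visiting the printed URL (with a real client ID) yields an authorization code, which you then POST, along with your client secret, to the token endpoint at https://accounts.google.com/o/oauth2/token to obtain the access token that the reporting calls require.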

Hadoop 2.2.0 – Single Node Cluster

We’re going to use the Hadoop tarball we compiled earlier to run a pseudo-cluster. That means we will run a one-node cluster on a single machine. If you haven’t already read the tutorial on building the tarball, please head over and do that first.
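For orientation before we start, the heart of a pseudo-distributed setup is two small files under etc/hadoop/ in the unpacked tarball. A minimal sketch; the port and values shown are the conventional defaults, not anything specific to our build:

```xml
<!-- etc/hadoop/core-site.xml: point the default filesystem at a local HDFS daemon -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- etc/hadoop/hdfs-site.xml: with one node there can only be one replica per block -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

We will fill in the rest of the configuration step by step below.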

Getting started with Hadoop 2.2.0 — Building

Start up your (virtual) machine and log in as the user ‘hadoop’. First, we’re going to set up the essentials required to run Hadoop. By the way, if you are running a VM, I suggest you kill the machine used for building Hadoop and restart from a fresh instance of Ubuntu to avoid any compatibility issues later. For reference, the OS we are using is 64-bit Ubuntu 12.04.3 LTS.

Getting started with Hadoop 2.2.0 — Building

I wrote a tutorial on getting started with Hadoop back in the day (around mid 2010). It turns out that the distro has moved on quite a bit with the latest versions, and that tutorial is unlikely to work now. I tried setting up Hadoop on a single-node “cluster” using Michael Noll’s excellent tutorial, but that too was out of date. And of course, the official documentation on Hadoop’s site is lame.

Having struggled for two days, I finally got the steps smoothed out and this is an effort to document it for future use.

Row-level Permissions in Django Admin

So you’ve started working with Django and you love the admin interface that you get for free with your models. You deploy half of your app with the admin interface and are about to release when you figure out that anyone who can modify a model can do anything with it. There is no concept of “ownership” of records!

Let me give you an example. Let’s say we’re creating a little MIS for the computer science department where each faculty member can put in his courses and record the course execution (what was done per lecture). That would be a nice application. (In fact, it’s available open source on GitHub, and that is what this tutorial is referring to.) However, the issue is that all instructors can access all the course records, and there is no way of ensuring that an instructor can modify only the courses that s/he taught. This isn’t easily possible because the admin doesn’t have “row-level permissions”.

AVL Tree in Python

I’ve been teaching “Applied Algorithms and Programming Techniques” and we just reached the topic of AVL trees. Having taught half of the AVL tree concept, I decided to code it in Python, my newest adventure. Bear in mind that I have never actually coded an AVL tree before and I’m not particularly comfortable with Python. I thought it would be a good idea to experiment with both of them at the same time. So, I fired up my Python IDE (that’s Aptana Studio, btw) and started coding.

For the newbie programmer, the code itself may not be very useful, since you can find better code online. The benefit is in being able to look at the process. You can take a look at the commits I made along the way over here on GitHub: how I structured the code when I began, and how I added bits and pieces. This approach should help in solving other problems as well. The final code (along with a rigorous unit test file) can be seen here: https://github.com/recluze/python-avl-tree
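To give a flavour of what the repository builds up to, here is a compact, self-contained sketch of AVL insertion with all four rebalancing cases. This is an independent illustration written for this post, not the code from the repository:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1  # height of a leaf

def height(node):
    return node.height if node else 0

def update_height(node):
    node.height = 1 + max(height(node.left), height(node.right))

def balance(node):
    # Positive: left-heavy; negative: right-heavy.
    return height(node.left) - height(node.right)

def rotate_right(y):
    x = y.left
    y.left = x.right
    x.right = y
    update_height(y)
    update_height(x)
    return x  # new subtree root

def rotate_left(x):
    y = x.right
    x.right = y.left
    y.left = x
    update_height(x)
    update_height(y)
    return y  # new subtree root

def insert(node, key):
    # Ordinary BST insertion...
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    # ...then rebalance on the way back up.
    update_height(node)
    b = balance(node)
    if b > 1 and key < node.left.key:      # left-left
        return rotate_right(node)
    if b < -1 and key > node.right.key:    # right-right
        return rotate_left(node)
    if b > 1:                              # left-right
        node.left = rotate_left(node.left)
        return rotate_right(node)
    if b < -1:                             # right-left
        node.right = rotate_right(node.right)
        return rotate_left(node)
    return node

def inorder(node):
    return inorder(node.left) + [node.key] + inorder(node.right) if node else []

# Ascending inserts would degenerate a plain BST into a linked list,
# but the AVL tree keeps its height logarithmic.
root = None
for k in range(1, 8):
    root = insert(root, k)
print(inorder(root))  # [1, 2, 3, 4, 5, 6, 7]
print(root.height)    # 3
```

The two rotation helpers plus the four if-branches in insert are the whole trick; everything else is a plain binary search tree.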

A No-nonsense OpenERP Installation Guide

This is a no-nonsense guide to the installation of OpenERP, the popular open source and customizable ERP solution, aimed at the complete newbie. Of course, there has to be just a little bit of “nonsense” to get you started. So, here it is:

  1. You need to have PostgreSQL installed as the database backend for OpenERP.
  2. OpenERP is written in Python, so you’ll need some packages for that part.
  3. There is a server and a client. The server is important; the client can be either a desktop client or a web client.
  4. We’ll cover all of this except the web client. You don’t need that to get started.
  5. We’re using OpenERP on Ubuntu 11.10, but an older version should also work.