
Rhyming words using python

June 15, 2013

Here is some simple Python code that uses the NLTK library to find rhyming words of a given level (the number of trailing phonemes from the CMU pronouncing dictionary that must match).

import nltk

def rhyme(inp, level):
    # requires the CMU pronouncing dictionary: nltk.download('cmudict')
    entries = nltk.corpus.cmudict.entries()
    # pronunciations of the input word (a word may have several)
    syllables = [(word, syl) for word, syl in entries if word == inp]
    rhymes = []
    for (word, syllable) in syllables:
        # a word rhymes if its last 'level' phonemes match
        rhymes += [word for word, pron in entries if pron[-level:] == syllable[-level:]]
    return set(rhymes)

print "word?"
word = raw_input()
print "level?"
level = input()
print rhyme(word, level)
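
As a small, hypothetical extension (not part of the original snippet), the same function can be used to check whether two words rhyme:

def doTheyRhyme(word1, word2, level=2):
    # two words rhyme at the given level if one is in the other's rhyme set
    return word1 in rhyme(word2, level)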

Facebook feed algorithm

December 17, 2011

Most of us regularly use social networking tools like Facebook, Twitter, Google+ etc. Twitter and Google+ don’t order their feeds based on relevance or popularity; there are separate tools like “trending tweets” and “what’s hot” for that purpose. Facebook, however, ranks the feed to reduce the amount of noise on your stream, something the other two popular sites don’t do. In this post I have tried to reverse engineer this algorithm in order to understand why it fails in a few cases and how to break through and exploit its weaknesses. Please note that these are strictly my opinions of how the system works; any criticism of the ideas presented is welcome.

The feed ranking seems to be based on two concepts – the importance of a post and its relevance to the person.

Importance of a post is measured using the reaction it gets in the form of likes, comments and shares. If 4 people like a post with 10 impressions, it is expected to have a lesser impact than a post that has 8 likes in 10 impressions. We can expect the comments and shares to further boost the score, although multiple comments from the same person don’t add much. These signals also decay with time: a like your post got yesterday probably doesn’t count as much today, and so the post goes down in the ranking.
ImportanceScore = (Σ_i C_i * e^(-k(t_now - t_i))) / NumImpressions
where k is a decay constant applied to how long ago each reaction happened, the C_i are different constants for likes/comments/shares, and NumImpressions is the number of times the post has been shown on screen.

Relevance to a person is important in deciding the probability that a given person might like your post. I have observed that I see more posts from people who have interacted with me in the past, and people whose posts I like or comment on see my posts higher up in their list. The relevance/friendship score can be determined by going through all the interactions between person X and person Y and summing them up, decayed over time and weighted by the type of interaction. This ensures that recent interactions count more.

Relevance/Friendship(X, Y) = Σ_i C_i * e^(-k(t_now - t_i))
where k is again a decay constant applied to how long ago the interaction happened, and the C_i are different constants per interaction type (chat/message/comment/like/share/tag/shared image/appearing in the same post/appearing in the same image).
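
Both scores reduce to the same time-decayed weighted sum, so here is a minimal Python sketch of how they could be computed. The interaction records, the per-type constants C_i and the decay rate k are all made-up placeholders for illustration, not Facebook's actual values.

import math
import time

# Hypothetical per-interaction-type constants (the C_i); real values are unknown.
WEIGHTS = {"like": 1.0, "comment": 2.0, "share": 3.0, "chat": 1.5, "tag": 1.0}
K = 1.0 / (3 * 24 * 3600)   # assumed decay rate: a signal fades over a few days

def decayed_sum(interactions, now=None):
    # interactions: list of (type, unix_timestamp) pairs
    if now is None:
        now = time.time()
    return sum(WEIGHTS.get(kind, 1.0) * math.exp(-K * (now - t))
               for kind, t in interactions)

def importance_score(interactions, num_impressions):
    # ImportanceScore = (sum_i C_i * e^(-k(t_now - t_i))) / NumImpressions
    return decayed_sum(interactions) / max(num_impressions, 1)

def friendship_score(interactions_between_x_and_y):
    # Relevance/Friendship(X, Y) = sum_i C_i * e^(-k(t_now - t_i))
    return decayed_sum(interactions_between_x_and_y)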

In addition ‘relevance’ can also take into account the number of your friends who liked my post to predict the probability of you liking it.

Other factors that could be considered:
Social status of the people who like/share/comment on your posts:
People who like everything are expected to contribute less to the importance score than people who like selectively. Someone with many friends interacting with them (note the direction of interaction), and hence a higher social rank, is expected to contribute more importance to your post than someone with just one friend. The calculation of social rank is far more complex, and I will omit the formulae and the confusion in this post.

I have also ignored concept recognition for now, the feature that allows FB to figure out what you are talking about, for example “Indian National Cricket Team” or “Kolaveri”. Essentially I have considered only posts that talk about things other people are not talking about. To adapt the model to include this signal, we would have to cluster such posts together, as Facebook does, and use the scores of other posts in the cluster to score a new post.

So, how can we beat the system? Here is some stuff that shameless people can try:

  • If the first few people who see your post like it, it gets a high rank immediately and lots of people end up seeing it, improving the chances of getting more ‘likes’.
  • If you get the popular social centres to like your posts, your posts are likely to get higher scores.
  • Interacting (chatting with, commenting…) with more people on Facebook from different friend clusters (school, high school, college, university, work, home, relatives) could help.
  • Sharing obviously funny/likable things that are bound to get a lot of likes once in a while to improve friendship_score. (“Go Sachin Tendulkar”/ Kitten pictures / stolen jokes etc)

Other things that could likely help

  • Replacing the words Google-Plus in your posts with G_o_o-g_l_e + may be? And Sonia Gandhi with 50|\|i@ G@|\|_|) |-|i.
  • Using images with text in them rather than plain text, as it increases the chances of being noticed.

I have given my opinions in this post in language that is as un-geeky and non-mathematical as possible; you can share your opinions and other things you have observed about the algorithm in the comments. Please also share it if you found it interesting or funny. Nothing pleases the author of an ad-free blog more than likes and comments.

Wikipedia Offline

August 6, 2011

Most of my work online involves checking mail, browsing forums for answers, reading Wikipedia for information, or social networking. With LAN cuts introduced in the IITs, it is difficult for a student to access information after 12:10 unless they break out somehow. In an earlier post, I had explained, with references to my code, how to download parts of Wikipedia; I thought it would be helpful to download the whole of Wikipedia onto your computer. In this post I will show you how Wikipedia / Stack Overflow / Gmail can be downloaded for offline use.

Wikipedia

Requirements:

  • LAMP (Linux, Apache, MySQL, PHP)
  • Around 30 GB of space in the primary partition and 30 GB of space for storage (in my case the root partition)
  • 7 GB of free Internet download
  • 3 days of free time

Wikipedia dumps can be downloaded from the Wikipedia site in XML format compressed with 7zip. This is around 6 GB when compressed and expands to around 25 GB of XML pages; it doesn’t include any images. This page shows how one can extract plain-text articles from the dump and construct corpora from it. Apart from this, a static HTML dump can also be downloaded from the Wikipedia page (wikipedia-en-html.tar.7z), and this version has images in it. The compressed version is about 15 GB and expands to over 200 GB because of all the images.

The static HTML dump can simply be extracted to get all the HTML files, and the required file can be opened to view the content. In case you download the XML dump, there is more to do – you have to extract the articles and create your customized offline Wikipedia with the following steps.

  1. Download the latest mediawiki and install it on your Linux/Windows machine using LAMP/WAMP/XAMPP. Mediawiki is the software that renders Wikipedia articles using the data stored in MySQL.
  2. Mediawiki needs a few extensions which Wikipedia has installed. Once we have mediawiki installed, say at /var/www/wiki/, download each of them and install it by extracting these extensions into the /var/www/wiki/extensions directory.
    The following extensions have to be installed – CategoryTree, CharInsert, Cite, ImageMap, InputBox, ParserFunctions (very important), Poem, randomSelection, SyntaxHighlight-GeSHi, timeline, wikihero – all of which can be found on the Mediawiki extensions download page; follow the instructions there. In addition you can install any template to make your wiki look however you want. Now your own wiki is ready for you to use and you can add your own articles, but what we want is to copy the original Wikipedia articles into our wiki.
  3. It is easier to import all the data once and then construct the indexes than to update the indexes each time an article is added. Open MySQL and your database; the tables used in the import are text, page and revs. You can delete all the indexes on these tables now and recreate them in step 5 to speed up the process.
  4. Now that we have our XML dump, we need to import it into the MySQL database. You can find the instructions here. In short, the ONLY WAY to get Wikipedia onto your computer really fast is to use the mwdumper tool to import into the database; the inbuilt mediawiki tool won’t work fast and may run for several days. The following command can be used to import the dump into the database within an hour.
    java -jar mwdumper.jar --format=sql:1.5 <dump_name> | mysql -u <username> -p <databasename>
  5.  Recreate the indexes on the tables ‘page’, ‘revs’ and ‘text’ and you are done.

You can comment if you want to try the same or if you run into any problems while trying.

Stack-overflow

Requirements

  • LAMP (Linux, Apache, MySQL, PHP)
  • Around 15 GB of space in the primary partition and 15 GB of storage (in my case the root partition)
  • 4 GB of free Internet download

media10.simplex.tv/content/xtendx/stu/stackoverflow has several Stack Overflow zip files available for direct download. Alternatively, Stack Overflow dumps can be downloaded using a torrent, and a torrent download can be converted into an FTP download using http://www.torrific.com/home/. Once you have the dumps, you can unpack them to get huge XML files for the various Stack sites. Stack Overflow is one of these sites; its 7zip file is broken into 4 parts, which have to be combined using a command (cat a.xml b.xml c.xml d.xml > full.xml). Once combined and extracted, we see 6 XML files for each site (badges, comments, postHistory, posts, users, votes). Among these, comments, posts and votes are the useful ones for offline usage of the forum. A main post may consist of several reply posts, and each such post may have follow-up comments. Votes are used to rate an answer and can be used as signals while you browse through questions. Follow these steps to import the data into the database and use the UI to browse posts offline.

  • Download Stack sites
  • Create a database StackOverflow with the schema using the description here. (comments, posts and votes tables are enough)
  • Use the code to import the data into the database. (Suitably modify the variables serveraddress, port, username, password, databasename, rowspercommit, filePath and site in the code.) A rough sketch of such an importer is shown after this list.
  • Run the code on Stack Mathematics to import the mathematics site. Bigger sites may take much more time and need a lot of optimization, along with plenty of disk space in the primary partition where MySQL stores its databases.
  • Use the UI php files to view a post given the post number along with the comments and replies.
  • TODO: Additionally, we can add a search engine that searches the ‘posts’ table for a query and returns the matching post numbers.
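
As a rough idea of what such an importer looks like, here is a minimal sketch that streams a Stack Exchange posts.xml dump (each post is a <row> element whose fields are attributes) and loads a few columns into a local database. Using SQLite instead of MySQL and the particular column choice are my own simplifications, not the code referenced above.

import sqlite3
import xml.etree.ElementTree as ET

def import_posts(xml_path, db_path="stackoverflow.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS posts (
                        id INTEGER PRIMARY KEY,
                        post_type INTEGER,
                        parent_id INTEGER,
                        score INTEGER,
                        title TEXT,
                        body TEXT)""")
    rows = []
    # iterparse streams the huge dump instead of loading it all into memory
    for _, elem in ET.iterparse(xml_path):
        if elem.tag == "row":
            rows.append((
                int(elem.get("Id")),
                int(elem.get("PostTypeId", 0)),
                int(elem.get("ParentId", 0)),
                int(elem.get("Score", 0)),
                elem.get("Title", ""),
                elem.get("Body", ""),
            ))
            elem.clear()                      # free memory as we go
        if len(rows) >= 10000:                # commit in batches (like rowspercommit)
            conn.executemany("INSERT OR REPLACE INTO posts VALUES (?,?,?,?,?,?)", rows)
            conn.commit()
            rows = []
    conn.executemany("INSERT OR REPLACE INTO posts VALUES (?,?,?,?,?,?)", rows)
    conn.commit()
    conn.close()

# import_posts("mathematics.stackexchange.com/posts.xml")   # illustrative path
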
Gmail offline

Requirements:
  • Windows / Mac preferred
  • Firefox preferred
  • 20 minutes for setup
  • 1 hour for download

Gmail allows offline usage of mails, chats, calendar data and contacts. You can follow these simple steps to get Gmail on your computer.
  • Install Google gears for firefox
    • You can install google gears from the site http://gears.google.com
    • If you are on Linux, you can install gears package. [sudo apt-get install xul-ext-gears]
    • Note: Gears works well in Windows, may fail on Linux
  • Login to gmail
  • Create new label “offline-download”
  • Create a filter {[subject contains: "Chat with"] or [from: <user-name>]} -> add label “offline-download”, to selectively download your conversations.
  • Enable offline Gmail in settings, and allow downloading of “offline-download” for 5 years. You can select the period of time as well.
  • Start the download; it will finish in around an hour and you will have your mails on your computer.

Offline Gmail creates a database called [emailID]@gmail.com#database on your computer; the Gears site gives you its location. You can find some information about offline Gmail here.
If you want a custom interface for your mails / chats etc., you can create one that queries the SQLite database mentioned above and presents the content however you want. The software diarymaker can be used to read your chat data, with plots of frequency over time, and rank your friends based on interactivity. It works on Linux and uses the Qt platform. I will add a post on it soon.
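
Since it is plain SQLite, you can open the database and explore its tables before building such an interface. A minimal sketch (the path is only an illustration; the Gears site gives you the real location, and the table names vary):

import sqlite3

# Illustrative path; offline Gmail stores "[emailID]@gmail.com#database"
# inside the Google Gears data directory, which depends on OS and browser.
db_path = "someone@gmail.com#database"

conn = sqlite3.connect(db_path)
# List the tables so you know what is available to query for a custom interface.
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name").fetchall()
for (name,) in tables:
    count = conn.execute('SELECT COUNT(*) FROM "%s"' % name).fetchone()[0]
    print(name, count)
conn.close()
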
Feel free to comment on any issue, and if you have an idea for downloading any other kind of data onto your computer for offline usage, please let us know with a comment.

Update: You can now download the Stack Overflow files directly from media10.simplex.tv/content/xtendx/stu/stackoverflow. (Courtesy: Sagar Borkar)

Wikipedia Mining

August 6, 2011

Wikipedia has several million articles as of today and is an excellent source of both structured and unstructured data. The English Wikipedia gets around 30K requests per second to its 3.5 million articles, contributed by over 150 million active users. This post tries to bring out some of my experiences in mining Wikipedia.

(Screenshot: an infobox in a Wikipedia article)

Wikipedia has infoboxes that are sources of structured information; the screenshot above is an example. One could build a quizzing application using the structured infobox data. The whole of Wikipedia is huge and difficult to download and process, so a small section, for example cricket, can be selected and all the articles under that category downloaded by recursively crawling its sub-categories and articles. There are several libraries that can be used for this; in Python, Beautiful Soup does the job.

The code here prints a category tree. A category page like this has a list of sub-categories (as expandable bullets) and a list of pages. The function printTree is called on each category page: it lists and downloads the ‘emptyBullets’ (leaf categories) and ‘fullBullets’ (expandable categories) and recursively calls itself for each sub-category, avoiding re-downloading duplicate pages. The getBullets function uses the Beautiful Soup library to get all ‘<a>’ tags in the HTML page marked with the classes ‘CategoryTreeEmptyBullet’ and ‘CategoryTreeBullet’; the other functions are simple enough to follow. The outcome is the category tree of Wikipedia. The ‘download’ function, which uses wget, can be replaced with a better download function that uses curl or a Python HTTP library.
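
A rough sketch of that recursive crawl is below. The CSS class names and page layout are taken from the description above; treat them as assumptions, since Wikipedia's markup may have changed since this was written.

import urllib.request
from bs4 import BeautifulSoup

BASE = "https://en.wikipedia.org"
seen = set()          # avoid re-downloading duplicate pages

def fetch(url):
    req = urllib.request.Request(url, headers={"User-Agent": "category-tree-demo"})
    return urllib.request.urlopen(req).read()

def get_bullets(html):
    # Sub-categories are marked with CategoryTreeBullet (expandable) and
    # CategoryTreeEmptyBullet (leaf) spans; the category link follows each span.
    soup = BeautifulSoup(html, "html.parser")
    for span in soup.find_all("span",
                              class_=["CategoryTreeBullet", "CategoryTreeEmptyBullet"]):
        link = span.find_next("a", href=True)
        if link and link["href"].startswith("/wiki/Category:"):
            yield link["href"]

def print_tree(category_path, depth=0):
    if category_path in seen:
        return
    seen.add(category_path)
    print("  " * depth + category_path)
    for href in get_bullets(fetch(BASE + category_path)):
        print_tree(href, depth + 1)

# print_tree("/wiki/Category:Cricket")   # add delays for polite crawling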

I have attempted to create a simple rule-based ‘Named Entity Recognition’ program to classify these articles into people / organizations / tours and so on. This Python file does the same. Here is a named-entity tagged version of the category.

Using the list of categories, we can get the list of articles in each category using this file, and the articles can be downloaded using this file. The code is pretty simple and I am mentioning these files here for the sake of completeness. The body (article content) can be extracted using this file.

This file takes in an article, extracts the information in the infobox and outputs it in a structured format. I have written custom processors for infoboxes of people, infoboxes of matches and so on.
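
To give an idea of the approach, here is a minimal sketch that pulls key/value pairs out of an article's infobox, assuming you have the raw wikitext and the fields sit one per line as "| key = value". Nested templates and multi-line values are ignored; it is an illustration, not the extractor linked above.

import re

def extract_infobox(wikitext):
    fields = {}
    start = wikitext.find("{{Infobox")
    if start == -1:
        return fields
    for line in wikitext[start:].splitlines()[1:]:
        line = line.strip()
        if line.startswith("}}"):          # end of the infobox template
            break
        m = re.match(r"\|\s*([^=|]+?)\s*=\s*(.*)", line)
        if m:
            fields[m.group(1)] = m.group(2)
    return fields

The custom processors mentioned above would then map these raw fields (which still contain wiki markup) into whatever structure is needed.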

Here are examples of final outputs. (Article body of Sachin Tendulkar, Infobox extracted from article Sachin Tendulkar, an infobox on the Indian tour of England in 2002.)

These outputs are in my own structure, but they can be standardized to a common format such as XML/JSON/RDF with a small script.

Google_menu

May 19, 2010

Hi, this post is the menu for my lunch at Google this afternoon, 19th May 2010. The menu is too big for me to read; hope you guys can read it and tell me about it.
Enjoy!

WELCOME DRINK

*Mango Milkshake (90 kcals) Vanilla Ice Cream,Mango Puree,Chopped Mango

APPETIZERS- VEGETARIAN

**Aloo Mutter Tikki(45 kcals)  Mash Potato,Mash Green Peas,Green Chilly,Spices.

*Mini Pizza (90 kcals) Pizza Base,Spinach,Artichoke,Tomato, Mediterranean Spices

**Pumpkin Kibbeh ( 70 kcals) Pumpkin,Broken wheat,Bread Crumbs, Mediterranean spices

APPETIZERS- NON- VEGETARIAN

Machi Amritsari (110 kcals) Sea Bass,Ajwain,Besan Flour,Malt Vinegar,Red Chilly Paste,Coriander Powder,Turmeric,Lemon

SOUP:VEGETARIAN

**Greengram Sundal ( 50 kcals) Green gram,Coconut,South Indian Tempering.

**Minestrone Soup (40 kcals ) Vegetable stock,Tomato,Pasta,Garlic,Shallots, Zucchini,Cabbage

SALADS

R**Tossed Salad (20kcals) Tomato,Color Peppers, Lettuce,Onion, Lime Dressing

R**Mixed Lettuce Salad (10kcals) Mesclun, Lollo Roso, Ice berg, Romaine

R**Sprouted Beans(10Kcals) Sprouted Beans

**Carrot And Raisin Parsley Salad (30 kcals) Carrot,Raisin,Lemon,Parsley

**Tomato Bocconchini (120 kcals ) Tomato, Bocconchini cheese,Pesto,Crushed Pepper.

*Pasta Salad (100 kcals) Farfalle,Parmesan Cheese,Chilly Flakes,Crush Pepper,Color Zucchini,Color Peppers, Creamy Pesto Dressing.

*Mixed Vegetable Raita (40 kcals) Mixed Vegetable,Low Fat Yogurt,Jeera Powder,Green Chilly

*Poppy seed Vinaigrette (5 kcals) Poppy seed Paste,Extra Virgin Olive Oil,Lemon Juice

*Sour Cream Cilantro Dressing (5 kcals) Sour Cream,Cilantro Chutney

MAIN COURSE: NORTH INDIAN

*Gobi Mutter Pudina(70 kcals) Cauliflower,Green Peas,Capsicum,Onion,Ginger,Garlic,Lemon ,Turmeric,Green Chilly.

*Gatte ka saag (105 kcals) Besan Flour,Ajwain,Green Chilly,Onion,Tomato,Low Fat Yogurt,Ginger,Garlic.

*Jeera pulao (135 kcals) Basmati Rice,Cumin Seed,Green Chilly,Onion,Milk,Brown Onion,Ghee, Spices

*Rajmah Masala (105 kcals) Rajmah,Onion,Tomato,Ginger,Garlic,Green Chilly,Red Chilly, seasoning.

MAIN COURSE: SOUTH INDIAN

**Beans Usli(50 kcals) Beans, Bengal gram,Garlic,Ginger,Green Chilly ,Shallots,South Indian seasoning.

**Mango Rice (130 kcals) Jeera Samba Rice,Mango extract,peanut,South Indian tempering, Sunflower oil.

**Rasam (30kcals) Tomato,Garlic,Pepper,Cumin seeds,Curry Leaves,Coriander Leaves,Turmeric,salt.

*Mixed Vegetable Sambar (80 kcals) Snake Gourd,Radish,Red Pumpkin,Pigeon Pea,Garlic,Shallots,Sambar Masala,Curry Leaves,Coriander seeds

Steamed Rice(120kcals) Sona Masoori rice

*Curd Rice

JAIN

*Mixed Veg Masala (Jain) Beans,Green Peas,Cauliflower,Cumin seeds,Capsicum,Green Chilly,Peppers.

**Red Rice(120 kcals) Red Rice

*Channa dal Tadka (95 kcals) (Jain) Channa dal,Tomato,Green Chilly,Red Chilly, seasoning.

INTERNATIONAL

*Cannelloni with Roast Tomato and Sage Sauce (120 kcals)

NON-VEGETARIAN

Butter Chicken (140 kcals) Chicken,Ginger,Garlic,Tomato,Coriander,Kashmiri Chilly,Whole Garam Masala,Butter,Cream.

PASTA STATION

Choice of Pasta and Sauces

PASTA

**Farfalle

**Penne

SAUCES

**Neapolitana Tomato,Extra Virgin Olive Oil,Shallots,Garlic,Italian Basil, Salt and freshly ground pepper

**Primavera Color Pepper,Color Zucchini,Aubergine,Plum Tomato,Italian Basil,Shallots,Garlic,Italian Basil,Salt and Freshly Ground Black Pepper

**Arrabiata Tomato,Extra Virgin Olive Oil,Shallots,Garlic,Italian Basil, Salt and freshly ground pepper,Chilly Flakes, Oregano

**Siciliana Aubergine,Plum Tomato,Italian Basil,Shallots,Garlic,Italian Basil,Salt and Freshly Ground Black Pepper,Capers,Parsley

*Alfredo & Spinach Spinach,Cream,Garlic,Shallots,Ground Pepper,Butter, Parmesan cheese.

*Basil Pesto Basil,Garlic,Pine Nuts,Extra Virgin Olive Oil, ground black pepper, grated Parmesan

*Cilantro pesto Cilantro,Garlic,Pine Nuts,Extra Virgin Olive Oil, ground black pepper, grated Parmesan

CHOICE OF VEGETABLES

(Grilled Yellow Squash,Zucchini,Asparagus,Grilled Color Peppers, Cherry Tomato,Grilled Aubergine,Scallions,Broccoli,Olives) (Black and green)

PIZZA STATION

Choice of Pizza and Toppings

INDIVIDUAL TOPPING(Grilled Yellow Squash,Zucchini,Asparagus,Grilled Color Peppers, Cherry Tomato,Grilled Aubergine,Scallions,Broccoli,Olives) (Black and green)

SPECIAL OF THE DAY

*PIZZA ALLE VERDURE (Tomato,Grilled eggplant, zucchini, capsicum & mushroom, Mozzarella Cheese)

*PIZZA MEDITERRANEAN (Tomato, black kalamata olives, garlic and oregano,Mozzarella Cheese. )

*PIZZA Neapolitana (Tomato, Zucchini, Yellow squash, Tomatoes,Olives, onions and garlic, Mozzarella Cheese.)

ACCOMPANIED WITH

Extra Virgin Olive Oil,Balsamic Vinegar,Grated Parmesan Cheese,Red Pepper Flakes (Garlic bread rolls/croutons available on request)

INDIAN BREAD BASKET

*Phulkha Wheat Flour,Milk

*Butter Naan (150 kcals) Maida,Water,Garlic,Butter.

*Javar Roti (85 kcals) Javar Flour,Water,Milk,Salt

DESSERT

*Mysore Pak (145 kcals) Besan Flour,Ghee,Sugar,Nuts,Cardamom Powder

*Swiss Roll (165 kcals)

R**Seasonal Cut Fruits

Google_Intern

May 11, 2010

In another 6 days, my summer internship program at Google will begin. I have been waiting for over 7 months for this to happen. Google booked a flight for me to get to Bangalore and a cab to take me home from the airport. I am expecting the best from Google. So how did I get here? Since this is my tech blog and not the philosophical one, this post is about how I got my internship at Google.

I faced 3 rounds of telephonic interviews, involving questions on algorithms and problem solving, over a span of 3 weeks. This was my second interview experience; the first one, with Microsoft, did not go that well. I RG-ed myself, in IITM lingo (caused my own failure). Learning from my mistakes, I was well prepared the second time. One thing I learnt was that I had not spoken much about myself, so I thought through everything I had done and planned what to tell the interviewer about myself. I had also been SPOJing out of interest and addiction, which helped me a lot.

My first interview lasted around 50 minutes. I spent the first 15 minutes introducing myself. Then started the programming round, where I had to write a piece of code for the atof function. Although the task seems simple (converting an error-free string to a floating point number), the actual work involved writing error-free code, tracing it like a machine and analysing the function to find the exact number of multiply/divide operations. My initial code took 2*N steps; I had to reduce it to N multiplications using simple optimisation principles. The next question involved mapping an arbitrarily sized string to a number, which is quite trivial as it is just a conversion of bases. The final question involved generating all possible subsets of a given set. A binary tree with recursion, or a bit-vector iteration from 0 to 2^N, are simple ways of generating all subsets of a set of size N. The interviewers were happy and I was selected for the next round, scheduled after a week and a half.
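
For the curious, here is roughly the kind of single-pass version that discussion converges on: accumulate all digits as one integer and do a single division at the end, instead of multiplying a fractional scale for every digit after the decimal point. This is my reconstruction, not the code from the interview.

def my_atof(s):
    # Parse an error-free string like "-123.45" in a single left-to-right pass.
    sign, i = 1, 0
    if s[0] in "+-":
        sign = -1 if s[0] == "-" else 1
        i = 1
    digits, frac_count, seen_dot = 0, 0, False
    for ch in s[i:]:
        if ch == ".":
            seen_dot = True
        else:
            digits = digits * 10 + (ord(ch) - ord("0"))   # one multiply per digit
            if seen_dot:
                frac_count += 1
    return sign * digits / (10.0 ** frac_count)            # single division at the end

# my_atof("-123.45") -> -123.45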

The difficulty level of the questions increased with each round, which is clear considering the simplicity of the questions in the first round. The second round was interesting. The algorithmic question asked me to find the total number of ways in which 2*N people sitting around a round table can shake hands so that no hands cross. I wasn’t aware of this problem, and it turned out to have some interesting results, as I discovered during the problem-solving session. I quickly suggested a recursive approach where we divide the table, recursively calculate the value for the two halves, multiply them and sum this over all possible divisions of the table. The image in this link should make it clear. This was done in the first few minutes and I suggested pseudo-code for it. The interviewer then asked me to make it more efficient and gave me a hint, asking how many times the value f(10) was calculated while calculating f(20). That suggested I had to use dynamic programming and memoisation. I suggested an alternate version where you maintain an array ‘F’ and remember the outputs of the function ‘f’. I further suggested an iterative alternative to the recursion where f(n) is calculated after all f(i), i < n, are calculated. The interviewer gave the last hint, through which I realised that I was calculating nothing but Catalan numbers (a simple recurrence relation). The rest of the interview involved some simple questions on the C++ language right out of the textbook (private constructors, static functions, object method invocation after destruction ...). The second interview was over in just half an hour.
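
As a reference, here is a short sketch of the iterative version described above: fixing one person's handshake splits the circle into two independent smaller tables, and the resulting values are the Catalan numbers. This is my own illustration of the recurrence, not the interview code.

def non_crossing_handshakes(n_pairs):
    # f[k] = number of non-crossing handshake arrangements for 2k people
    f = [0] * (n_pairs + 1)
    f[0] = 1
    for k in range(1, n_pairs + 1):
        # person 1 shakes hands with person 2i, leaving (i-1) pairs on one side
        # of the table and (k-i) pairs on the other
        f[k] = sum(f[i - 1] * f[k - i] for i in range(1, k + 1))
    return f[n_pairs]

# non_crossing_handshakes(3) -> 5, the 3rd Catalan number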

The final interview was scheduled a couple of days after the second. The first question was a graph-theoretic one where I had to find the maximum number of edges in a DAG. One direction of the implication is just a proof by example, and the other was proving that only nC2 unordered pairs (a, b) can be constructed. The next question was the first one where I actually wrote code. I had to write a program to find the number of 1's (S(n)) in the series (0, 1, 10, 11, 100, 101 ... B(n)) where B(n) is the binary representation of n. E.g. S(5) is the number of 1's in (0, 1, 10, 11, 100, 101), so S(5) = 7.

I remembered something from my combinatorics class which suggested S(2^k - 1) is easy to calculate. S(2^k - 1) = k * 2^(k-1) is a very easy formula to derive using summation, and I did that in the first 5 minutes.

000  001
010  011
100  101
110  111

S(7) = 3 * 2^2 = 12

I had to write code which worked for any n. This involved extracting the first bit of the number and processing the rest of it. I had a vague recurrence relation in my mind and I spent over half an hour trying to ‘write’ code which worked for all end cases. There was a small bug in my code, I could not fix it in time, and the interviewer moved on to the next question. I could have typed out the code on my computer, executed it and tested it instead of manually tracing a program written on a piece of paper.
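
Here is one way of writing S(n) that handles the end cases, sketched after the fact rather than the exact code from the interview. It counts, for each bit position, how many complete on/off cycles of that bit fit into 0..n plus the trailing partial cycle, so it runs linearly in the number of bits of n.

def count_ones_upto(n):
    # S(n): total number of 1 bits in the binary representations of 0, 1, ..., n
    total, bit = 0, 0
    while (1 << bit) <= n:
        period = 1 << (bit + 1)                    # this bit cycles with period 2^(bit+1)
        full_cycles = (n + 1) // period
        total += full_cycles * (1 << bit)          # each full cycle contributes 2^bit ones
        remainder = (n + 1) % period
        total += max(0, remainder - (1 << bit))    # ones in the trailing partial cycle
        bit += 1
    return total

# count_ones_upto(5) -> 7 and count_ones_upto(7) -> 12, matching S(5) and S(7) above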

The last question involved suggesting a strategy for Google to offer alternate search results in case the user makes an error: the “Did you mean ...” part of a Google results page, like here. I suggested naive strategies like flipping characters in the query to see if better search results are obtained. The idea is clearly not scalable because of the sheer number of ways the user might have made the error. I then suggested using the keyboard key placement and restricting the character flipping to just the neighbouring keys. This again wasn’t good enough; the interviewer wanted a better approach. I thought for a while and suggested a directed graph structure over queries, with a directed edge from one string to another if the suggestion is highly probable. The edge weights could be proportional to the probability of error, so the closest few neighbours of a given node would list the most likely queries, and a simple BFS could reveal those neighbours. The problem now was assigning weights to the edges, for which I had to suggest an algorithm. I thought along the lines of people querying for something, not being happy with the results, and changing the query immediately to get a good response. The edge between query1 and query2 could be inversely proportional to the number of times this happens. The interviewer was pleased with the approach. I then requested a few minutes to complete the earlier program, fixed the bug and submitted the code, and it worked great in logarithmic time (linear in the number of bits of ‘n’).
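
A toy sketch of that last idea, assuming a hypothetical log of immediate query reformulations has already been extracted from user sessions (the data shape and names here are purely illustrative):

from collections import defaultdict

def build_reformulation_counts(reformulations):
    # reformulations: iterable of (original_query, corrected_query) pairs taken
    # from sessions where the user immediately changed an unsatisfying query
    counts = defaultdict(lambda: defaultdict(int))
    for q1, q2 in reformulations:
        counts[q1][q2] += 1
    return counts

def did_you_mean(counts, query, top=3):
    # the most frequent immediate reformulations are the most likely corrections
    candidates = sorted(counts[query].items(), key=lambda kv: -kv[1])
    return [q for q, _ in candidates[:top]]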

In a couple of weeks, HR called me up, asked me to send my certificates and informed me that I had been selected for an internship at Google. :D

Update: http://picasaweb.google.com/photos.jobs/IndiaOfficePhotos# has a collection of images that describe the environment in Google. Looks awesome!

exebit

October 15, 2009

This is a post publicizing exebit, the computer science department festival of IIT Madras. It will be conducted in February next year. As I am one of the cores for the event (the web-ops core), I am posting this blog.


The first event was held this February and there was a decent turnout for all the talks and events. This year it is expected to be much bigger.

Here is a list of events that will be held in exebit.

Online events

Onsite events
Huge cash prizes will be given away for all the events. The descriptions of the events can be found in the links. For more information about exebit visit http://www.exebit.org

Last year had big sponsors like Nvidia, mac, maples, Ericsson, Cadence … sponsoring it. The events last year included

Online events

Onsite events

Hope exebit is going to be a big success this time. The cores for exebit are striving hard for that.