I bought a gaming laptop, and Google provides a service called Colab, so now I can learn deep learning.

Several days ago, I bought a gaming laptop to do some deep learning work. My girlfriend is very happy because now she can use the new gaming laptop to play PlayerUnknown's Battlegrounds. And I'm happy because this laptop is several times faster than my Mac Pro at deep learning tasks.

I updated all my code to the latest TensorFlow and played with the tutorial code one by one. And I found out that Google now provides a service called Colab: our code can run on a Google VM, with a GPU that Google provides.

But a lot of people find their code can't run on Colab, because the system often breaks with out-of-memory errors. It turns out Google may share memory and GPUs between different sessions. There is some code that can help you see how much memory is free for you.
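The snippet below is adapted from the Stack Overflow thread linked at the end of this section. It is a sketch, not the only way to do it: it assumes the gputil, psutil, and humanize packages (installed in the first line), and that you run it in a Colab cell.

    # Run in a Colab cell; the ! line is a shell command.
    !pip install gputil psutil humanize

    import os
    import psutil
    import humanize
    import GPUtil

    gpu = GPUtil.getGPUs()[0]            # Colab gives you a single GPU
    proc = psutil.Process(os.getpid())

    # General (CPU) RAM of the VM, and the size of this Python process
    print("Gen RAM Free:", humanize.naturalsize(psutil.virtual_memory().available),
          "| Proc size:", humanize.naturalsize(proc.memory_info().rss))
    # GPU RAM as reported by nvidia-smi (GPUtil shells out to it), in MB
    print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(
        gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil * 100, gpu.memoryTotal))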

Normally Google provides a Tesla K80 with 11GB of memory, so when you get lucky you may get information like this:
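(Illustrative numbers, not a real capture; the layout is what the snippet above prints.)

    Gen RAM Free: 12.8 GB | Proc size: 156.3 MB
    GPU RAM Free: 11439MB | Used: 0MB | Util   0% | Total 11439MB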

And sometimes almost all of your memory is already used, like this:
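(Again illustrative numbers, but this is the shape of the bad case.)

    Gen RAM Free: 12.8 GB | Proc size: 188.7 MB
    GPU RAM Free: 566MB | Used: 10873MB | Util  95% | Total 11439MB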

When that bad thing happens, you can use the kill command to free the memory first:
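This is the trick from the same Stack Overflow thread: send SIGKILL to every process the runtime can signal, which crashes the VM and forces Colab to hand you a new one.

    # Run in a Colab cell: kills every process, including the kernel itself
    !kill -9 -1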

You may need to wait several minutes. This kills your current runtime, and after you connect to a new runtime, you may be lucky enough to get the full memory unused.

When I ran the DCGAN example from the TensorFlow tutorials, my Mac Pro (3.7 GHz Quad-Core Intel Xeon E5, two AMD FirePro D300 2048 MB) needed 255 seconds to finish one epoch.

Google Colab (Nvidia Tesla K80, 11GB) needed 30 seconds per epoch.

My gaming laptop (Nvidia GeForce GTX 1050 Ti, 4GB) needed 34 seconds per epoch.

So, Google Colab is very useful, but when you run a task that takes too long, Colab may disconnect, and you may never be able to reconnect to the original session, so you may have to run it again and again. That's why I'm happy I bought my own gaming laptop.

See also: https://stackoverflow.com/questions/48750199/google-colaboratory-misleading-information-about-its-gpu-only-5-ram-available

Google SEO and Structured Data

If you search for some specific words like "apple pie", Google will show some special results in the first few positions, like this:

(Image: Google search results for an apple pie recipe)

You can see that these results contain some extra content, like rating, votes, time, and calories. Google must know these results are recipes. But how can they know? Some deep learning algorithm? No, this is all about structured data.

At Google I/O 2017, they gave a talk about structured data.

Google supports a lot of structured data:

  1. Formats: JSON-LD (the one Google recommends), Microdata, and RDFa.
  2. Google supports a lot of structured data enhancements, like Breadcrumbs, Corporate Contacts, Galleries and Lists, Logos, Sitelinks Searchbox, Site Name, and Social Profile Links, and a lot of content types: Articles, Books, Courses, Datasets, Events, Fact Check, Local Businesses, Music, Podcasts, Products, Recipes, Reviews, TV and Movies, and Videos. You can find details at the Search Gallery (a minimal Recipe example follows this list).
  3. If you want to know how to use it, see Introduction to Structured Data.
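To make it concrete, here is a minimal sketch of Recipe structured data. The property names (@context, @type, aggregateRating, and so on) are real schema.org vocabulary, but the values are made up for illustration. I build the JSON-LD in Python, in keeping with the rest of this post, and print the script tag you would embed in the page's HTML:

    import json

    # Hypothetical recipe; the keys are schema.org Recipe vocabulary,
    # the values are invented for illustration.
    recipe = {
        "@context": "https://schema.org",
        "@type": "Recipe",
        "name": "Apple Pie",
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": "4.5",
            "ratingCount": "276",
        },
        "totalTime": "PT1H30M",  # ISO 8601 duration: 1 hour 30 minutes
        "nutrition": {
            "@type": "NutritionInformation",
            "calories": "512 calories",
        },
    }

    # Embed this tag in the page so Google can read it.
    print('<script type="application/ld+json">')
    print(json.dumps(recipe, indent=2))
    print('</script>')

With markup like this on the page, Google can show the rating, votes, time, and calories right in the search results.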

Why did Google build the TPU instead of inventing some superpowered GPU?

Deep learning researchers always think training is the core problem, because they always lack the funds to purchase the fastest machines. But Google doesn't worry about this: they have tons of powerful machines, and finding the resources to train a good model isn't very hard for Google.

Winning deep learning contests isn't Google's goal; it is just a PR trick. Google wants to provide AI cloud services. So they keep releasing their well-trained models: Inception-v3, word2vec, etc. Most customers will use APIs built on Google's well-trained models, like the Cloud Natural Language API, Cloud Speech API, Cloud Translation API, Cloud Vision API, and Cloud Video Intelligence API. Some of them will want to use models provided by Google or other companies, or just do some fine-tuning. And only a few of them will want to train their models from scratch.
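For example, labeling an image with the Cloud Vision API takes just a few lines with Google's Python client library. This is only a sketch: the client surface has changed between library versions, you need credentials configured (GOOGLE_APPLICATION_CREDENTIALS), and pie.jpg is a made-up file name.

    # pip install google-cloud-vision
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    # pie.jpg is a hypothetical local image
    with open("pie.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    # Google's pretrained model does all the work; no training on our side
    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(label.description, label.score)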

So, Google cares about serving more than training, and they built the TPU to speed up serving and reduce service latency.