Deep learning researchers tend to treat training as the core problem, largely because they rarely have the budget for the fastest machines. Google has no such worry: with an abundance of powerful hardware, finding the resources to train a good model is not hard for Google.
Winning deep learning contests is not the goal for Google; it is a PR exercise. What Google wants is to sell AI cloud services, which is why it keeps releasing well-trained models such as Inception-v3 and word2vec. Most customers will simply call the APIs built on these models: Cloud Natural Language API, Cloud Speech API, Cloud Translation API, Cloud Vision API, and Cloud Video Intelligence API. Some will take models provided by Google or other companies and fine-tune them. Only a few will want to train a model entirely from scratch.
In short, Google cares more about serving than about training, and that is why it built the TPU: to speed up inference and reduce serving latency.