
First day at Google, but for real

Jerome Li

When I first joined Google, I found something to be lacking… but rather than jump to another company like I’ve done so many times, I decided to do an internal transfer — find another team within the company to work for. It would be the first job I’ve taken where I actively wanted to be there, rather than someone else wanting me to be there (being headhunted by Amazon and Google) or as a result of my not wanting to be where I was (asking a friend to refer me to Microsoft). Whether or not this was a good decision was something I would learn later, but at least for now, it felt like the right thing to do.

Problems and solutions

As I expected, getting a change in my job title from Software Engineer in Test (SET) to Software Engineer (SWE) was not a big deal. It didn’t affect the nature of my day-to-day job or how my colleagues saw me. I was still an ordinary Googler with the same free food and bikes and other fun stuff. Now that I look back, Google had a “we’re all in this together” spirit which other companies could not really imitate. But I digress.

The localization (L10N) infrastructure team, without getting too much into details, was responsible for owning and maintaining the — you guessed it — localization infrastructure at Google. Well, what does it do, you ask? It is a loose collection of tools and processes that make up the localization process at Google: taking a product and translating it into different languages. As one might imagine, there are a lot of products at Google, such as Gmail, Maps, AdSense, all the various Android products, and so on.

Our team did not do the translating itself — that was done by actual human translators, experts in their respective languages. They used software tools built by our team to assist in the translation process. Then we had a pipeline for extracting the content that needed translation from the products, sending it to the translators, and sending the translated content back. All of this had to work with the various teams that used our systems, each with their own tools and processes.
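To make that a little more concrete, here is a toy sketch of the round trip in Python. Every name in it is hypothetical and invented for illustration; this is just the general shape of such a pipeline, not Google's actual tooling.

```python
# Toy sketch of a localization round trip. All names are hypothetical;
# this shows the general shape, not Google's actual pipeline.

# 1. Product code keeps user-visible strings under stable message IDs.
MESSAGES = {
    "greeting.hello": "Hello, {name}!",
    "button.save": "Save",
}

def extract_for_translation(messages: dict[str, str]) -> list[dict]:
    """Turn source messages into work items for human translators."""
    return [{"id": mid, "source": text, "locale": None}
            for mid, text in messages.items()]

def import_translations(items: list[dict], locale: str) -> dict[str, str]:
    """Fold translated work items back into a per-locale message table."""
    return {item["id"]: item["translation"]
            for item in items if item["locale"] == locale}

# 2. Ship the work items off; translators fill in the translations.
work = extract_for_translation(MESSAGES)
for item in work:
    item["locale"] = "fr"
    item["translation"] = {"greeting.hello": "Bonjour, {name} !",
                           "button.save": "Enregistrer"}[item["id"]]

# 3. The product loads the translated table at runtime.
fr = import_translations(work, "fr")
print(fr["greeting.hello"].format(name="Jérôme"))  # Bonjour, Jérôme !
```

The real thing is vastly more complicated, of course, once you add placeholders, plurals, screenshots for context, and dozens of product teams with their own release schedules. But the extract-translate-import loop is the core of it.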

Yeah, it can be pretty mind-numbing stuff, but apparently it’s also a highly profitable industry. And it’s relevant, I swear.

Getting a grasp of the situation

Now, a bit about the team. There were eight engineers in L10N infrastructure, with one manager who was spread quite thin. I was part of a smaller sub-team focused on a particular area, so I only worked closely with two other engineers. We also worked with a PM, who was more focused on other projects in the larger organization. And of course, people came and went over time.

Our systems worked sort of okay, and had been like that for a long time. Bugs came up here and there, but that's normal. However, as time went by, I started putting together a picture in my mind of the kinds of problems we were facing.

To begin with, everything we owned was built by a motley bunch of engineers who came and went over a period of fifteen years. There was no unified vision or oversight, or even a continuously operating team. That makes sense today when I think about it; our team is better thought of as a working group: a group of people put together to achieve some specific task. That presented two problems. One, we were on our own if we wanted to learn more about our own systems, let alone make improvements to them. Two, we would always be staffed with just enough people to keep things running, and no more — a skeleton crew. That was a major obstacle to innovation, among other things.

Speaking of making improvements, it was difficult for us to do just that. It would be even more difficult for us to adapt to newer, ever-changing requirements. Many products depended on us as part of their processes. Therefore, if we made any change, we needed to make sure not to disrupt existing users. We also had plenty of work on our plate — a regular stream of support (read: customer service) tickets, bugs, and feature requests.

As a result, the team was in stasis. It was "maintenance mode" all the time. I sort of knew what I was getting into when I talked to the manager before joining the team (he did emphasize that the team had a lot of legacy systems), but I was only able to see the whole picture once I was "in."

Parting thoughts

I confess, I may have cheated a little while writing this — I did not know all of this when I first joined the team. It took me at least a year to get a grasp of the situation. With that out of the way: my time at L10N infrastructure can be split into three periods. Like Beethoven! Except less glamorous. The first period was about exploration and figuring out my place. The second was when I had settled into a routine. And the third was a somewhat anticlimactic period during which I eventually made the decision to leave the company.

You may ask, why would anyone want to leave Google? I will get there in due time. But, as they say, all good things must come to an end. More detail about what happened during the last three of my four years at Google… next time.