Background Recently, I came across this blog post written by Vicki Boykis in which she documented her process of building a Twitter bot that tweets Soviet artworks at scheduled intervals. Inspired by the idea (and motivated by boredom), I decided to build a Twitter bot myself that tells questionably coherent stories through a series of tweets. I loved this idea because, first of all, I love literature, especially the classics.
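For the curious, a bot like this boils down to posting pre-chunked text on a timer. Below is a minimal sketch using the tweepy library, with placeholder credentials, story text, and interval; it is an illustration of the idea, not the bot's actual code.

```python
# A minimal sketch, not the actual bot: tweet one story chunk at a
# fixed interval using tweepy (credentials below are placeholders).
import time

import tweepy

# Hypothetical credentials -- load these from the environment in practice.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# A story pre-split into tweet-sized (<= 280 character) chunks.
story_chunks = [
    "It was the best of times, it was the worst of times...",
    "it was the age of wisdom, it was the age of foolishness...",
]

for chunk in story_chunks:
    api.update_status(chunk)  # post one chunk of the story
    time.sleep(60 * 60)       # wait an hour before the next tweet
```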
On September 10, 2019, I lost my job in a mass layoff. Now, after having endured a month of unemployment, fortunately concluded by signing another job offer today, I decided to sit down and retrospectively document my journey, which, although arguably small and inconsequential in the grand scheme of things, was important and transformative for me. As the cliché goes, you learn a lot about yourself when going through obstacles. Indeed, over the span of this month, I came to identify my weakness, my insecurity, my solitude, and, later, my resilience and my self-worth.
Tonight, I had the chance to attend a talk given by one of my favorite non-fiction writers, Yuval Noah Harari, whose latest book, 21 Lessons for the 21st Century, quite notably set off a long, enduring existential crisis for me when I read it. Organized by Stanford’s Human-Centered AI institute, the talk focused on the societal impact of AI and was carried out in a conversational style with Stanford’s own leading AI researcher, Fei-Fei Li.
Growing up in China, one of my favorite conversational topics is complaining about its education system. For context, I have 16 years of survival experience in it and, even though it has been more than a decade since I graduated from high school, I still have regular nightmares about failing an important exam or being punished for talking too loudly during a class break (both of which have actually happened). Hence, I do consider myself a subject-matter expert.
The title says it all.
No, seriously, this post is all about me. So much for not talking about me…
This may be cliché but, on this very last day of 2018, I want to end the year with a reflection on the things that have greatly influenced me this year. These “things” come in various shapes and forms: people, books, podcasts, courses, events, and YouTube series. Whatever they are, as long as they left a big impression on me and helped me grow, be it personally or professionally, I’m going to list them here.
Background Since the idea of attention originated (Bahdanau et al., 2015), it has become the norm to insert it into a seq2seq model, especially for translation. It is such an intuitive and powerful idea (not to mention the added benefit of peeking into an otherwise black-box model) that many tutorials and blog posts make it sound like one should not even bother with a model without it, as the results would surely be inferior.
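For reference, here is the additive attention from Bahdanau et al. (2015) in its standard textbook form; the notation is the general one, not tied to any particular tutorial:

$$
e_{ij} = v^\top \tanh\left(W_s\, s_{i-1} + W_h\, h_j\right),
\qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k} \exp(e_{ik})},
\qquad
c_i = \sum_{j} \alpha_{ij}\, h_j,
$$

where $s_{i-1}$ is the previous decoder state, $h_j$ is the encoder annotation of source token $j$, and $c_i$ is the context vector fed to the decoder at step $i$. The weights $\alpha_{ij}$ are what one visualizes to peek inside the model.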
Background Recently, I discovered the Second Language Acquisition Modeling (SLAM) challenge hosted by Duolingo earlier this year, in which participants were asked to predict the per-token error rate of a given language learner based on their past learning history. To conclude the competition, the team also wrote a paper summarizing the results, the approaches taken by the various participants, and their respective effectiveness.
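To make the task concrete, one way to frame it is as per-token binary classification. Below is a toy scikit-learn sketch with made-up features and data; it illustrates the framing only, not any participant's actual approach.

```python
# A toy sketch of the SLAM task framed as per-token binary
# classification; features, data, and labels here are placeholders.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each row: features of one token in one exercise; label 1 = learner erred.
train_features = [
    {"token": "gato", "pos": "NOUN", "days_learning": 3.0},
    {"token": "es", "pos": "VERB", "days_learning": 3.0},
    {"token": "perro", "pos": "NOUN", "days_learning": 10.5},
]
train_labels = [1, 0, 0]

vec = DictVectorizer()
X = vec.fit_transform(train_features)
model = LogisticRegression().fit(X, train_labels)

# Predicted probability that the learner gets a new token wrong.
X_new = vec.transform([{"token": "gato", "pos": "NOUN", "days_learning": 12.0}])
print(model.predict_proba(X_new)[:, 1])
```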
Background A few weeks ago, I experimented with building a language translator using a simple sequence-to-sequence model. Since then, I had been itching to add to it the extra attention layer that I had been reading so much about. After much research, I came across (quite accidentally) this MOOC series offered by fast.ai, where, in Lesson 13, instructor Jeremy Howard walked the students through a practical implementation of the attention mechanism using PyTorch.
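As an illustration of what such an attention layer looks like, here is a minimal additive-attention module in PyTorch; the names and dimensions are illustrative, and this is a generic sketch, not the fast.ai Lesson 13 implementation.

```python
# A minimal sketch of additive (Bahdanau-style) attention in PyTorch;
# illustrative names and dimensions, not any specific course's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdditiveAttention(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.W_dec = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_enc = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, decoder_state, encoder_outputs):
        # decoder_state: (batch, hidden); encoder_outputs: (batch, src_len, hidden)
        scores = self.v(torch.tanh(
            self.W_dec(decoder_state).unsqueeze(1) + self.W_enc(encoder_outputs)
        )).squeeze(-1)                       # (batch, src_len)
        weights = F.softmax(scores, dim=-1)  # attention distribution over source
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
        return context, weights              # weights are what one visualizes
```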
Background After having tried my hand at LSTMs and built a text generator, I became interested in sequence-to-sequence models, particularly their applications in language translation. It all started with this TensorFlow tutorial where the authors demonstrated how they built an English-to-French translator using such a model and successfully translated “Who is the president of the United States?” into French with the correct grammar (“Qui est le président des États-Unis?”).
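To make the architecture concrete, here is a minimal PyTorch sketch of an encoder-decoder; the TensorFlow tutorial's actual model is larger and differs in detail, and the vocabulary sizes and dimensions below are illustrative.

```python
# A minimal seq2seq encoder-decoder sketch in PyTorch (illustrative
# sizes; not the TensorFlow tutorial's model).
import torch
import torch.nn as nn


class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the source sentence into a single hidden state...
        _, hidden = self.encoder(self.src_emb(src_ids))
        # ...then decode the target conditioned on it (teacher forcing).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), hidden)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits


# Toy usage: batch of 2 sentences, source length 5, target length 6.
model = Seq2Seq(src_vocab=1000, tgt_vocab=1200)
logits = model(torch.randint(0, 1000, (2, 5)), torch.randint(0, 1200, (2, 6)))
print(logits.shape)  # torch.Size([2, 6, 1200])
```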
Background Lately, I’ve been spending a lot of time learning about deep learning, particularly its applications in natural language processing, a field I have been immensely interested in. Before deep learning, my forays into NLP were mainly about sentiment analysis and topic modeling. Those projects were fun, but they were all limited to analyzing an existing corpus of text, whereas I’m also interested in applications that generate text themselves.
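As a taste of the generative side, here is a minimal character-level sampling loop with a PyTorch LSTM; the model is untrained here, so the output is gibberish, but the feed-the-output-back-in loop is the essential mechanism. Everything in it (vocabulary, sizes) is a toy placeholder.

```python
# A minimal sketch of character-level text generation with an LSTM
# in PyTorch; untrained, so the sampled text is gibberish.
import torch
import torch.nn as nn

chars = sorted(set("hello world"))  # toy character vocabulary
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

emb = nn.Embedding(len(chars), 16)
lstm = nn.LSTM(16, 32, batch_first=True)
head = nn.Linear(32, len(chars))

# Sample one character at a time, feeding each output back as input.
idx = torch.tensor([[stoi["h"]]])
state, generated = None, ["h"]
for _ in range(20):
    out, state = lstm(emb(idx), state)
    probs = torch.softmax(head(out[:, -1]), dim=-1)
    idx = torch.multinomial(probs, 1)
    generated.append(itos[idx.item()])
print("".join(generated))
```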