Intended obliviousness and unintended consequences

Tonight, I had the chance to attend a talk given by one of my favorite non-fiction writers, Yuval Noah Harari, whose latest book, 21 Lessons for the 21st Century, has quite notably set off a long, enduring existential crisis for me ever since I read it. Organized by Stanford’s Human-Centered AI institute, the talk focused on the societal impact of AI and was carried out in a conversational style with Stanford’s own leading AI researcher, Fei-Fei Li. The two make a particularly interesting pair, as each represents a distinct party in the context of AI: Li as the engineer and Harari as the philosopher. Although the event was not exactly a debate, each held their ground well and provoked the other enough to make the conversation stimulating. In the end, despite having read his books and mulled these questions over for months, I still walked away with my brain set on fire. Sitting on the train back home, I felt the urge to pull out my laptop and start putting my thoughts down.

Before entering the heart of the matter, I’d like to comment on Harari’s huge popularity, which has accumulated from all directions and grown unhindered. Personally, I have always felt ambivalent about treating public intellectuals as rock stars. On the one hand, I see the value of popularizing the “big ideas” concocted by scholars in the ivory tower, making them more accessible to the general public and, in return, helping those ideas develop and take concrete shape. Seeing people line up outside the auditorium long before the doors opened, their attention fully captured by the conversation on stage with a fervor that could rival a real rock concert, I would say that goal was more than reached. On the other hand, I am wary whenever a public figure is worshipped this unanimously, because everything he/she says still needs to be challenged. Throughout his book, Harari urges us to think for ourselves, but are we ironically abandoning his advice by copying his opinions wholesale instead? Many of us have a tendency to follow a thought leader, and Harari, intellectual, brilliant, and accessible, comes as a perfect one. What’s particularly interesting is that he has garnered an especially strong following in Silicon Valley, the exact spot that churns out the type of technology he vigorously warns us against. Are the tech giants that strongly endorse him and eagerly invite him onsite to give private talks really intending to engage with him and educate their employees? Or is it simply a PR facade that lets them appear modern, liberal, and socially conscious both to the public and to their employees? The intention is unclear. That being said, Harari is not prescriptive and he presents more questions than solutions, which makes it hard for anyone to follow him blindly, twist his words, or turn them into bullet-pointed cultural values.

Without further ado, I’ll reflect on the key topics of the conversation, namely: 1) Is AI potentially bad for society? 2) If so, what should we do about it?

Is AI potentially bad?

Before determining whether AI is bad or not, we need to decide whether AI as of today is powerful enough to be potentially bad. As a practitioner, I agree with Li that the media has overstated the potential of this still-nascent technology and that, in reality, it is really not that powerful yet. At the same time, I also agree with Harari that technology never needs to be that powerful to make a dent. In the context of personalization, for example, it just needs to know a person better than they know themselves in order to manipulate them successfully. Indeed, we are being willingly tricked every day into buying things we don’t really need, watching shows we don’t particularly care about, and developing certain viewpoints and adopting certain ideologies about which we, in fact, know very little. This may seem rather harmless, but when enough of us fall victim to this kind of manipulation disguised as personalization, the collective impact is non-negligible. Harari’s favorite example is elections, where many people vote out of emotion instead of rationality. As a result, the same kind of technology that sold us a useless product can just as easily sell us a politician. From this point of view, AI, as buggy and unsophisticated as it is today, is indeed fully capable of being influential.

In fact, a buggy piece of technology can be even more harmful than a mature one. In its most basic terms, what a machine learning algorithm does is learn to recognize patterns from its training data. Naturally, if the data is biased, the algorithm will learn to be biased as well. I’ve recently come across an excellent paper in which the authors summarized the potential biases in a machine learning model into five categories, among which representation bias is the most widely discussed. Such bias occurs when a certain subset of the population is underrepresented in the training data. As a popular example, machine translation works by training a model on a large corpus of parallel sentences, one for each language (e.g., the source English sentence “I am happy” is mapped to the target Chinese “我很开心”). When enough data is given (and when enough parameters are packed into a model), the model will be able to learn the mappings between the source and the target languages and perform automatic translation of unseen sentences. However, as magical as it sounds, it relies heavily on the input parallel corpus. If, say, the corpus includes more speech from men than from women on a particular theme, the model will learn to associate that theme more often with men as well. In a famous example, the author showed that when using Google Translate to translate from Turkish, a gender-neutral language, to English, a gender-specific one, Google assigned genders to pronouns in a non-random fashion and produced results such as “he is a doctor” and “she is a nurse.” (Google has since started to fix the problem.) Yes, it is perhaps still factual that in our society today there are more male doctors than female and more female nurses than male, but do we need AI to amplify the imbalance on a potentially universal scale? If this example seems rather harmless, imagine what would happen if AI started to automatically attach labels to people based on their genders, origins, appearances, and sexual orientations, so that before one even has a chance to present oneself, the machine has completed its evaluation and shared the result with the world in a matter of milliseconds.
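
To make the mechanism concrete, here is a minimal sketch in Python (with made-up counts and a deliberately naive “most frequent pronoun wins” rule, not any real translation system) of how a skew in the training corpus becomes the “translation”:

```python
# Toy illustration of representation bias in translation (hypothetical data).
# A gender-neutral source pronoun must be rendered as "he" or "she" in English;
# if the model simply picks the pronoun most often seen with each profession in
# its training data, the corpus imbalance decides the output.

from collections import Counter, defaultdict

# Made-up parallel-corpus statistics: (profession, pronoun) pairs with a skew.
training_pairs = (
    [("doctor", "he")] * 80 + [("doctor", "she")] * 20 +
    [("nurse", "she")] * 85 + [("nurse", "he")] * 15
)

# "Training": count how often each pronoun co-occurs with each profession.
counts = defaultdict(Counter)
for profession, pronoun in training_pairs:
    counts[profession][pronoun] += 1

def translate_neutral_pronoun(profession: str) -> str:
    """Resolve a gender-neutral pronoun by picking the pronoun most
    frequently seen with this profession in the training data."""
    return counts[profession].most_common(1)[0][0]

for profession in ["doctor", "nurse"]:
    print(f"{profession}: {translate_neutral_pronoun(profession)} is a {profession}")
# -> "he is a doctor", "she is a nurse": the skew in the data, not anything
#    about the person being described, decides the output.
```

The toy model never sees anything about the actual person; the corpus counts alone choose the pronoun, which is precisely how representation bias travels from the data into the output.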

This is why representation bias is scary - it reinforces the bias that humans exhibit in snap judgments. If you think about it, a human is like a highly efficient machine that constantly collects data from his/her environment and computes summary statistics behind the scenes, so that when a new person/object is presented, he/she can easily pull out the relevant features and add them up to produce an aggregate assessment known as a first impression. And by the law of large numbers, and the fact that most of us don’t change our environment that much, our snap judgments, made on people/objects sampled from that same environment, prove right over and over again, which in turn reinforces our bias over and over again. Now imagine an AI algorithm that collects the same biased data but at a scale that far surpasses yours, is trained to do the same calculation but in a fraction of the time, and shares the result with you before you are even able to reach your own conclusion - what possible consequences can it produce? Based on the recent research that claims to detect one’s sexual orientation and criminality based solely on one’s facial features, I would say the sky is the limit. We’ve all seen science fiction where a machine is used to scan a person’s implanted chip and fetch all the historical data on him/her. Yet, if the abovementioned applications are put to use, we don’t even need any chip. Instead, we can simply scan the person’s face. Oh, you have “less facial hair” and “lighter skin,” along with a particular angle “from nose tip to two mouth corners,” a certain “upper lip curvature,” and a certain distance “between two eye inner corners” - you must be a gay criminal. Researcher Blaise Aguera y Arcas called it the new physiognomy, a once-debunked junk science that has, by an interesting turn of events, been revived by AI. Even setting aside the issue of judging people only by their appearances, the practice of using technology to make snap judgments on humans is alarming in itself. As a consequence, there is no more individuality, no more self. Instead, we are decidedly grouped by a stochastic algorithm into our respective clusters, where even true outliers are treated collectively as “others.”
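
As a back-of-the-envelope illustration of this feedback loop (toy numbers, purely hypothetical), a small simulation shows how sampling only from one’s own environment keeps “confirming” a belief formed there, no matter what the world outside looks like:

```python
# Toy simulation of the snap-judgment feedback loop described above.
# A belief is estimated from a skewed environment ("my bubble"), then
# repeatedly re-confirmed by new samples drawn from that same environment,
# so it never gets corrected toward the outside world.

import random

random.seed(0)

RATE_IN_MY_BUBBLE = 0.9   # how often a trait appears where I happen to look
RATE_ELSEWHERE = 0.3      # how often it appears in the population I never see

def observe(rate: float) -> int:
    """One observation: 1 if the trait is present, 0 otherwise."""
    return 1 if random.random() < rate else 0

# Form an initial snap judgment from my own narrow environment.
history = [observe(RATE_IN_MY_BUBBLE) for _ in range(100)]

# Keep sampling from the same environment: each new observation "proves"
# the judgment right, so the belief only hardens.
for _ in range(1000):
    history.append(observe(RATE_IN_MY_BUBBLE))

belief = sum(history) / len(history)
print(f"belief formed inside the bubble: {belief:.2f}")
print(f"actual rate outside the bubble:  {RATE_ELSEWHERE:.2f}")
# The belief converges to the bubble's statistics (~0.9), not the outside
# world's (~0.3): the law of large numbers confirms the bias, not the truth.
```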

Beyond the level of individual citizens, AI also has a greater impact on nations as a whole. One of my favorite discussions was when Harari commented on the famous AI arms race between the US and China and opined on how the outcome will impact the rest of the world. On the surface, it appears that China is leading the race in terms of the speed at which we discover new use cases and implement them at scale. Thanks to our massive centralized databases, companies that never shy away from collecting any private data, and, most importantly, the support of our government, we never have any problem pushing AI into the civil domain. On the other hand, facing public outcry and pressure from Europe, the US seems to be leading in the ethical discussion of AI and appears to be altogether the more responsible player. From this perspective, it seems that the remaining almost 200 countries in the world would be better off if the democratic US won the race. However, is that actually the case? Do we still believe that Captain America will save the world? In the era of America First and building a wall, it would be extremely naive to believe that the US, a staunch capitalist at heart, would practice AI with the benefit of the entire world, or even the entire country, in mind.

In fact, whoever wins the race makes no difference. The problem is that there shouldn’t be any race at all. Instead of competing with each other, we should be collaborating on a global scale. But how? As the speakers pointed out, although the papers are publicly accessible and the code is open sourced, the key ingredient, the data, is locked securely in the databases of a few major organizations. Harari dubbed the phenomenon data colonization, a term that sends chills down my spine. How revolutionary and regressive at the same time! In the 21st century, colonialism is still here; what has changed is merely the means. Instead of using weapons, money, and cultural brainwashing, one uses data. Thank God for big data! Because of it, it is now possible for a data-rich country to understand the citizens of a third-world country better than that country understands itself. What’s more, not only was the data collected from the latter, it was likely even labeled and processed by the latter, only to be used against the latter in the end. What a division of labor! What collaboration!

What can we do?

On an individual level, a piece of advice that Harari has repeatedly offered is to know yourself, because 1) as simple as it sounds, most people don’t, and 2) if you don’t, you are susceptible to being hacked by algorithms that do. Nowadays, it is astonishing how much personal data we are willing to give away in exchange for free products and touted conveniences that we in fact never needed. Simply hitting “I accept” returns an endless scroll of entertainment that refreshes itself every time we are bored. The transaction is clear: here is all my data; now tell me what I should buy, what I should watch, and what I should think about, because I don’t have time myself. What a deal! Yes, the companies are attentively watching us and taking notes, but aren’t we in control after all? What is there to lose? What is lost is our identity. We are no longer ourselves - we are what the companies convince us to be. (Not to mention that, in the absence of enough data, personalization defaults to following the crowd, and we end up becoming more and more like one another.) Without truly knowing ourselves, we are not as independent and immune to manipulation as we think. In fact, we are asking to be (mis)led. When a company comes up with a shiny new gadget and convinces us that, despite the fact that we have, up to this point, managed to live our entire lives without it, we suddenly need it so badly that we will line up outside the store before it opens, are we still in control? Steve Jobs once famously said, “people don’t know what they want until you show it to them.” Following this logic, I don’t think it is unfair to say that companies have since been successfully profiting from our ignorance of our own selves.

How do we resist? Obviously, one can always choose to disconnect oneself from social media if one wants, right? Not so simple. Thanks to the ubiquity of a medium that extends well beyond the scope of our harmless pastimes, it is now a luxury to opt out of social media, given how much our lives, both personal and professional, depend on it. I recently came across a podcast about cyberbullying in which one victim mentioned that, despite being targeted and harassed repeatedly by trolls, she still could not afford to disappear from the platform because her media career depends on it. Ironically, it seems that only those who created the medium have the means to disappear from it, while the rest of us obediently hand over our data in exchange for the promised goods that supposedly make the equation balanced.

If we can’t disconnect ourselves from the digital noise, the best we can do is minimize our exposure. Minimalism is not just about reducing our consumption of physical goods, but also our consumption of information. In the book 21 Lessons for the 21st Century, Harari described his strategy of reading only about a few select subjects that he truly cares about while blocking out the rest. I was hesitant to adopt it at first because I had been told repeatedly to stay informed. How can one call oneself a responsible citizen if one does not know enough about current political affairs, social conflicts, and technological advancement? It turned out that when I cast my net so wide, I failed to be sufficiently informed about any of them. Instead, I found myself relying more and more on 140-character summaries of what I was supposed to know. Skimming over everything, I ended up being informed about nothing. Worse, I found myself increasingly relying on my “trusted sources” to tell me how I should feel. If I at least knew the generally accepted way to react to a particular subject, I wouldn’t look so ignorant, would I? For a while, it seemed that everyone was telling me to feel angry about everything, so I dutifully did, without questioning since when being informed had become linked to being angry. In the end, I was left being angry with myself: for not gathering enough evidence to justify this perpetual state of anger, for not thinking critically enough before adopting a conclusion, for following the herd, and for not knowing myself. In trying to be informed, I achieved the opposite. This is when I came across Harari’s recommendation: if you truly care about a subject, be prepared to invest time in doing your own research by reading books - not the click-bait online articles carefully crafted by the media. When I tried the strategy myself, the difference was pronounced. When I read short articles online, my opinions tend to be swayed easily because, to cater to modern readers’ short attention spans (like mine), the substance of the arguments is usually diluted while the eye-catching conclusions are presented front and center. A good book, however, has the luxury (and the duty) to explain a subject from the inside out, with concrete data and extensive references to back a single argument. Surely it’s time-consuming, but the alternative is being blissfully misled. The choice is ours.

In the meantime, as researchers and practitioners in this field, we have the responsibility to act with conscience. It is easy to assume that what we do day to day has no profound impact on humanity, now or in the future, and that we are merely low-ranking employees doing our jobs and earning a living. In fact, during the intervals of the talk, my mind kept wandering back to my unfinished tasks at work: the bugs I have yet to fix, the documents I have yet to draft, and the people I have yet to figure out how to work with. As ecstatic and liberating as it is to talk about “the big problems” looming over our society, I am not paid to talk or think - I am paid to execute. Before I become an established writer like Harari who can live off his thoughts, I still have to live off my day job. As much as we talk about these problems, as much as we debate the solutions, when the lunch break is over, we go back to work. What’s next? Harari mentioned that he wrote his books in order to get people to talk. Now that we have, what do we do next?

When asked this question at the event, Harari facetiously responded that he is not an engineer, so he doesn’t know. Amid the laughter, it occurred to me that it is indeed up to us, the practitioners, to fix the problems. It seems to me that, starting a few years ago, whenever someone poses a similar question to a Silicon Valley tech giant, the default answer almost always comes back to universal basic income (“UBI”), so much so that I had convinced myself that, once we have such a system in place, the world will finally be at peace. But when? Will it be ready when the first wave of able-bodied workers suddenly finds itself unemployable? Or, more practically, is it realistic to believe that either of the two leaders of today’s AI arms race will be generous enough not only to champion but to actually implement such a system, one that would ensure a basic quality of life not only for their own citizens but also for the foreign workers whose jobs their inventions take? The fact that we still lack solidarity in fighting climate change should foreshadow the future.

Furthermore, when a company brings up UBI, it almost sounds like a shifting of responsibility. Instead of saying “we are going to tackle the problem head-on,” they seem to be saying “look, this is the government’s job, not mine. I’m just doing my own job. You should go complain to them.” (Or, in the famous corporate lingo, “we don’t own this.”) This, in my opinion, is the crux of the problem: the division of labor is so finely drawn that no one wants to venture out of their own little territory. In fact, why should they? Despite what their mission statements say, companies are tasked with maximizing their shareholders’ value. Meanwhile, creating a technology that may increase inequality and social unrest appears in no company’s risk disclosures. If the shareholders don’t care, why would the companies themselves?

During the talk, both speakers repeatedly urged AI researchers to work closely with experts from other domains such as philosophy, sociology, and education, as Stanford’s own Human-Centered AI institute aims to do. As reasonable as it sounds, I wonder how realistic the idea is outside the ivory tower of academia. In an industry where everyone is measured by productivity, who has the time to talk to philosophers about the long-term impact of their work on humanity? And if, by some stretch of the imagination, such cross-functional collaboration were implemented, how would it not be perceived as a “blocker”?

If we can’t trust our industry or our organizations to always act with the best intentions, we are left only with ourselves. If our companies don’t have time to opine on the meaning of what we do, let’s take the time to think about and question it ourselves, because, fundamentally, we are not our companies. At the beginning of his book Sapiens, Harari argued that companies, like many human-invented artifacts, are fictional and meaningless, and that they only carry meaning because we collectively believe in them. All those cultural values and all those team-building exercises fundamentally try to enforce this collectivism while, curiously, the companies continue to claim that they believe in and embrace diversity. Once we separate ourselves from our immediate organizations, the questions start rolling in. What’s the meaning of my job? Am I actually helping people? If so, who am I helping? Am I only helping the rich get richer? If not, am I only helping people like me and my fellow tech workers, who huddle together in a claustrophobic fishbowl, gradually losing empathy for the outside world? If our product does impact people on the outside, what kind of impact is it? Aside from a few summary statistics that are subject to interpretation, do we really know, or are we just collectively crafting a story to convince ourselves that we are “making the world a better place”?

With the advancement of AI, I think it’s especially pertinent now to jump out of the fishbowl, even if only psychologically, and develop empathy for the people outside, because the technology we are practicing and the data we are holding have massive implications. During the talk, Harari compared AI to nuclear weapons. Although I don’t agree with the analogy, I do see its value as a wake-up call to all of us, practitioners and researchers alike: it’s no longer simply about doing our job, playing with the data, or pushing the envelope. Our job does not end after a model is built or a paper is published. We need to think about how our work is going to be adopted and used, and what unintended impact it will have. In responding to criticism, the authors of the paper that uses facial recognition techniques to detect criminality explained that “[they] are merely interested in the distinct possibility of teaching machines to pass the Turing test on the task of duplicating humans in their first impressions (e.g., personality traits, mannerism, demeanor, etc.) of a stranger. The face perception of criminality was expediently (unfortunately to us in hindsight) chosen as an easy test case…” In other words, what’s there to blame? As researchers, they are “merely interested” - isn’t that what researchers are supposed to be? The goal of their research is to train machines to duplicate humans’ “first impressions” - if our first impressions are flawed and subject to cognitive biases, why are we training machines to be biased as well? What’s even more striking to me is that they chose the subject of detecting criminality because it was “an easy test case.” How rational! How harmless!

Arguments like “it was not our intention that our work be interpreted this way” or “the public attention is overblown and uncalled for” are flawed, because it is indeed our responsibility to explain our research, not just to the scientific community but also to the public. A good point Harari made in the talk is that scientific communication is getting harder not because we are getting worse at explaining science, but because science itself is getting harder to explain, and the phenomenon is not limited to AI. Nevertheless, that alone does not give us an excuse to make a half-hearted attempt at explaining our work and then blame the public for not understanding it and the media for misinterpreting it. It seems to me that there is a fairly strong communitarianism in research, whereby scientists like to identify themselves with a particular community and invest themselves fully in it. I would argue, however, that it is now both inadequate and irresponsible to lock oneself up in a comfort zone, leave the interpretation and implementation of one’s work to downstream consumers, and throw one’s hands up and play the victim when something goes wrong. The old-fashioned division of labor that leaves everyone comfortable and no one accountable no longer works. With any work related to AI, the impact always ends up on humanity.