Audrey Tang, Double Ten Day & The Transcultural Republic of Citizens

Viral Alarm


China Heritage marked the 1 October National Day of the People’s Republic of China with a chapter in Viral Alarm: China Heritage Annual 2020 consisting of a number of poems. These were prefaced with a quotation from the ancient Book of Documents 尚書, which features the line:

The innocent cry to Heaven.
The odour of such a state is felt on high.


The 10th of October Festival 雙十節, also known as Double Tenth or Double Ten Day, commemorates the start of the Wuchang Uprising of 10 October 1911 which led to the fall of the Qing dynasty and the founding of the Republic of China in 1912. It is celebrated on Taiwan as National Day 國慶日.

We mark this day with the transcript of a conversation between Audrey Tang and Yuval Noah Harari that took place on 30 June 2020. It was hosted by RadicalxChange, a movement inspired by the work of Eric Glen Weyl that is ‘dedicated to reimagining the building blocks of democracy and markets in order to uphold fairness, plurality, and meaningful participation in a rapidly changing world.’


Audrey Tang (唐鳳, 1981-) is an accomplished computer programmer and open-source software hacker who became politically active during the 2014 Sunflower Movement. On 1 October 2016, Tang was named Digital Minister in the government. Tang had earlier been a prominent participant in g0v 零時政府, a civic-tech initiative aimed at opening up government operations to digital accessibility. Tang played a significant role in the Taiwanese response to the COVID-19 pandemic. In discussing the approach used, Tang quoted Chapter 11 of the Tao-tê Ching 道德經:

‘Hollowed out,
clay makes a pot.
Where the pot’s not
is where it’s useful.…

So the profit in what is
is in the use of what isn’t.’

from ‘How Taiwan’s Unlikely Digital Minister
Hacked the Pandemic’
 WIRED, 23 July 2020

Tang is a post-gender conservative anarchist, a supporter of radical transparency in government and the first transgender, non-binary official to serve in the Taiwanese cabinet. Tang has remarked that their ‘ambitious neighbour’ is a constant source of negative inspiration for what Taiwan should avoid as it works to create an equitable, open-ended and democratic digital future.

They translate the official name of Taiwan — 中華民國 in Chinese — as ‘The Transcultural Republic of Citizens’, have a penchant for the ‘Vulcan salutation’ (accompanied by the benediction ‘Live long and prosper’), and have a habit of closing presentations with a line from the poet/songwriter Leonard Cohen:

‘There’s a crack in everything,
and that’s how the light gets in.’

— Geremie R. Barmé
Editor, China Heritage
10 October 2020


Related Material:


Audrey Tang 唐鳳. Photograph by John Yuyi for WIRED

To Be or not to Be Hacked?
The Future of Identity,
Work and Democracy

Audrey Tang & Yuval Noah Harari
in Conversation With Puja Ohlhaver


Puja Ohlhaver

Welcome, everyone, to today’s RadicalxChange conversation between Yuval Noah Harari and Audrey Tang. I’m Puja Ohlhaver. I’m very humbled to be part of this conversation. The title of today’s conversation is “To Be or not to Be Hacked? The Future of Identity, Work and Democracy.”

Joining me from Israel is Yuval. He is a gay historian and author of three widely popular books which you’ve probably heard of or read. The first one is Sapiens, which is a history of our distant past. The second one is Homo Deus, which is about our distant future. The most recent book is 21 Lessons for the 21st Century, which is about today.

Also joining me is Audrey Tang from Taiwan. Audrey is the first Digital Minister of Taiwan, also the first transgender member of the Taiwanese cabinet. Audrey is also an artist and hacktivist and also an anarchist.

Given that we’re still in Pride Month [in Euramerica. Taiwan Pride 臺灣同志遊行 is held in October], I’d like to start the conversation today about gender identity and in particular ask both of you about your process of self-discovery in your own gender identities and how that has influenced your view of technology. Yuval, we’ll start with you.

Yuval Noah Harari

The process of realizing that I’m gay and coming out really shaped my attitude not just towards technology, but towards science and history in general. It first made me realize how little people can know about themselves. I came out, I realized that I was gay, only when I was 21.

I look back at the time when I was 15 or 16, and I just really can’t understand it. It should have been extremely obvious that I was attracted to boys and not to girls. I think I’m an intelligent person. I should have figured it out, but I just didn’t. There was a kind of split in my mind, such that I didn’t know this about myself.

This is why, today, when I look at the development of new surveillance technology, one of the things that most interests me is what happens when somebody out there can know me better than I know myself. I’m quite sure that if Facebook or TikTok or whatever existed when I was a teenager, they could find out I was gay in like two seconds, long, long before I knew that about myself.

What does it mean to live in a world when a corporation or a government can know something so important about me that I don’t know about myself? This is one of the big questions I have today about technology and its impact on politics and society.

Puja Ohlhaver

Audrey, can you tell us about your process?

Audrey Tang

Certainly. I will first acknowledge that Taiwan is one of very few jurisdictions, maybe the only one in the world right now, to have held a physical parade, a couple of days ago and yesterday too: a gay pride and transgender and LGBTIQ rights march.

In Taiwan, it’s only been one year since we legalized marriage equality, the first in Asia, by adopting a very innovative way of legalizing the by-laws but not the in-laws, meaning that in the legal code, we make sure that when a same-sex couple weds, they wed as individuals with all the same rights and duties.

Their families are not affected; in Mandarin we have eight different names for aunts and uncles, and these relationships remain unchanged. This, to me, signifies something that I felt very personally when I was a teenager.

My natural testosterone level is that of maybe an 80-year-old male human being, meaning that when I was 13 or 14 years old, I was somewhere between the average male adolescent’s and the average female adolescent’s testosterone levels.

I was very lucky at the time to have discovered the World Wide Web, the Internet, and a lot of gender non-binary and genderqueer people who informed me that even though I may be alone in my neighborhood of, say, 100 people…

Actually, even if it is just 1 in 100 or 1 in 1,000, that means that there are millions of us [laughs] on the Internet who can form a non-binary support group to make sure that our own lived experiences can be shared freely.

Later on, when I was 24 years old, I went through a second puberty, the female puberty, which lasted for another two or three years. It enables me, as someone who works as a ‘poetician’, [laughs] to make sure that I understand and empathize with all the different sides.

Because in my mind there is no half of the world that’s different from me, I can empathize with other people’s lived experiences, because I’ve been through both puberties as well.

When we legalized marriage equality, I think it really was an innovation when we discovered this intergenerational way of reconciling our different positions: the older generations’ reliance on family and group values, community values, and the younger generation’s more individualistic values.

But in our legal code, we make sure that we respect those traditions in a transcultural way. I often translate the official name of our country as The Transcultural Republic of Citizens, and that also constitutes my main work. 

Puja Ohlhaver

Great, thank you. Audrey, do you worry about the situation that Yuval describes, where technology can know us better than we know ourselves, and before we know ourselves?

Audrey Tang

A lot of my work is to ensure that the social sector, in RadicalxChange terms a data coalition or data cooperative, owns the means of production, in this case, the production of data.

That is to say, if people produce data in a way that is passive, that enables a surveillance state or surveillance capitalism, which will lead to the scenario that Yuval has articulated in his worries.

But the social sector, that is to say ordinary citizens, can understand what they’re collecting. For example, in Taiwan the leading contact-tracing technology, the winner of our coronavirus hackathon, Logboard, collects people’s whereabouts, temperatures, symptoms and so on, but it never transmits them anywhere. It keeps them strictly within their phone and nowhere else.

When the contact tracers, the medical officers, come to investigate, it generates a one-time link that contains exactly the kind of information that contact tracing needs, without divulging any private details about their friends and families, as would often be revealed by a traditional contact-tracing interview.

This is just a very simple example, but it shows the autonomous nature of people when they’re owning their own data, when they’re sharing it only with their most intimate and trusted friends and families.

Together, this intersectional data collaborative can prove to be much more powerful than any forced ‘please install this application’ technology that a state or multinational company can impose on society.

So, I would argue that Taiwan’s successful counter-pandemic effort is based on this kind of social-sector collaborative that owns the data and does not store it in the “cloud,” but rather only on each other’s personal devices.

Yuval Noah Harari

Maybe if I can say something about that. I definitely don’t believe in technological determinism. I don’t think that the kind of either surveillance capitalism or the surveillance totalitarianism that we see developing in some countries – whether it’s in the US or in China – is an inevitable outcome of the current technological breakthrough. I think it’s a big danger.

The biggest danger really is the rise of a new kind of totalitarianism that we have never seen before in history simply because it’s now technically feasible to follow everybody all the time.
Even in the darkest moments of the 20th century – in Stalin’s Russia or in Mao’s China – it was simply technically impossible to follow everybody all the time and to know them better than you know yourself.

If you need a police or government agent, a KGB agent, to follow everybody 24 hours a day, you don’t have enough agents. Even if you had all the agents, they would just produce paper reports about what you do. Somebody needs to read the reports and analyze them. That’s impossible.

Now it’s becoming feasible, technically, to do that because you don’t need human agents, you have all the sensors and cameras and microphones. You don’t need human analysts, you have AI machine learning and so forth. So, it is becoming a possibility, but it’s not inevitable. I think if we take the right actions – like what’s being done in Taiwan – that can prevent this dystopian scenario from happening.

We saw it in the 20th century that you can use the same technology to build completely different kinds of regimes. You just need to look at South Korea and North Korea. Same people, same geography, same history, same culture using the same technology in a completely different way.

But an even deeper question is, let’s say we succeed in preventing the rise of digital dictatorship, where some government follows us all the time and knows everything about us. What happens if the data really is collected in a responsible and secure way, and it serves us and not the government or some big corporation?

Still, the deep, philosophical question is even in this situation, authority is likely to shift away from humans to algorithms in the most important decisions of our lives, like where to study or where to work or whom to marry.

It’s not that I have this online government that forces me to do something. It’s just that I know the algorithm knows me far better than I know myself and can make recommendations for me, and I increasingly just rely on what the algorithm tells me.

It improves all the time. The algorithm doesn’t need to be perfect. It just needs to be better, on average, than me at making these decisions, and gradually the authority will shift.

Philosophically, I think this is the really big question of our time. Even if we prevent the dystopian scenario of digital dictatorships, how do we deal with democratic algorithms that serve us, but still know us better than we know ourselves?

Audrey Tang

So maybe we’ll ignore the outline and I will comment on that viewpoint. For the sake of brevity, I’m just going to say “code.” When I say “code,” please think “algorithm.” Code is, of course, having the kind of impact that Yuval described.

Code is like law, but it’s not a law of text. It’s like a law of physics in cyberspace, because code determines what can happen and what cannot happen. Technically, it’s not that it cannot happen, but it takes a lot of effort, like being a professional hacker, to make it happen. For most people it just cannot happen, because it’s pre-regulated by the code.

So, it also regulates what is transparent. For example, code can make the state transparent to the citizen, as Taiwan does, or it can make citizens transparent to the state, as the PRC does.

Every time we deploy code as part of our society, it establishes a normativity. That is, it tells us what’s legal, what’s even thinkable, within these lines, just like physics. You cannot even think, “Oh, I’m going to violate a law of physics today,” because that’s just not how the world works.

That is a very different position from our current, text-based normativity, where you can do civil disobedience: you willfully occupy the Parliament, as we did in 2014, then you argue that it’s legal and you convince the judges.

So, the impact, as Yuval said, is that whenever we deploy code, we must have the same kind of access to justice, to open futures, to different interpretations. Either the code is agreed to by social norms, which would be a positive impact, or it will be set by a few people and basically restrict everybody else’s imagination, which would have a negative social impact, even if it is set not by just one or two actors.

Even if it’s by tens of thousands of programmers, that is still a kind of restriction and, to me, also a negative social impact.

Yuval Noah Harari

Yeah, I think it’s an extremely important point, this comparison between code and physics. A lot of people today still don’t get the enormous power of coders to actually shape reality. Yes, coders can’t change E=mc², we can’t change that. But social reality is increasingly constructed by these codes.

Start with a very simple thing. In the old days, you’d go to a government ministry and you’d need to fill out some forms. Somebody decided that on the form you have to check male or female, and that these are the only two options.

In order to fill out your application, your form, or whatever, you have to tick one. Because somebody decided, some functionary decided, that the form would have only these two options, this is now your reality.

Audrey Tang

I often tick off both, by the way.

Yuval Noah Harari

Yeah. But then again, in some systems, you can’t just tick both. Paper is a good example, because paper, in a way, is still more enabling.

If you’re creative, you get this government form on paper, and you tick both boxes. Wonderful. But if it’s on a computer, then somebody coded the form in such a way that, no, you can only tick one. Unless you tick one, it doesn’t go on to the next screen or whatever, and this is now your reality.

Maybe some 22-year-old guy in California did it without even thinking too deeply that he is making this deep, philosophical, ethical, and political decision that will have an impact on the lives of people all over the world.

Audrey Tang

You can see it when you input emoji, that is to say, those very abstract symbols, like in the movie “Arrival,” that we all use to communicate now. For a very long time, the people emoji were all male by default, and you had to apply a gender selector for one to look like a woman.

Just in very recent years, like in the past year or two, the multinationals and the Unicode Consortium, the standards body for the code-makers, started to say, no, the default person, the “joyful laughter,” “joyful tears” face, needs to look gender neutral by default. If you want it to look like a boy or like a girl, you have to do additional work, and it must be the same amount of work to make it look like a boy as to make it look like a girl.

So that is the kind of norm I’m talking about. If the code-makers don’t allow for future interpretations, if the maker of the checkboxes doesn’t allow for an “other” or non-binary choice (which, by the way, Taiwan provides for people arriving at our airports when filling out the health check form), if you don’t design that in, then of course you still have to rely on civic hackers, meaning people who imagine different civic futures, to patch it in.

However, Taiwan is the only jurisdiction in Asia that has complete freedom of assembly, of speech, and so on, so civic hackers will not be unduly punished. In every other place in Asia, just as same-sex marriage is not possible, [laughs] this kind of civic hacking can often get people in trouble.

To me, that reflects how willing a society is to treat its algorithmic code as being as flexible as its legal code, with a due process of change.

Yuval Noah Harari

This issue of, again, the difference between the natural laws that shape our life and the rules that we invent is one of the main themes of history. Of course, every culture, every religion claims that its rules, its laws, are the laws of nature, and that those who break the law are doing something unnatural.

This is obviously wrong. As you said, if a law is really natural, you simply cannot break it. If some religion comes and says, “For two men to love one another or for two women to get married to one another, this is unnatural,” this is, by definition, wrong.

A real natural law, like the fact that you can’t move faster than the speed of light, you simply can’t break. It’s not up to you. Obviously, biology and physics enable two women to love each other or to have sex with one another. It’s only human code which says, “No, no, no, no, no. This is wrong. We don’t want to allow it.”

The good thing, in a way, about computer code is that, even though it of course contains a lot of biases, programmed either intentionally or unintentionally by human engineers, in essence it can be corrected much more easily.

If a human being has a bias against, say, gay people or against black people, you can discover, “Oh, this person or this system has a bias,” and you can explain it to people. People can even agree. But that will not be enough to change the bias, because the bias comes from some place far deeper than our conscious intelligence. It comes from our subconscious.

Now, with computer code, you can say that computers don’t have a subconscious. If you find where in the code the bias is encoded and you change that, then in a way it’s much easier to make computer code gay-friendly or LGBT-friendly than to make a human being change their biases.

Puja Ohlhaver

This is an interesting point. Audrey, in a previous talk you mentioned an example from the recent COVID crisis, the pink masks, and how civic technology actually facilitated gender mainstreaming and could go deep into our biases. Can you tell us a little bit about that example?


‘Colours are unisex’, 15 April 2020


Audrey Tang

Certainly. In Taiwan, what we call the social innovation, our pandemic response system, is based on the three pillars of fast, fair, and fun [快速、公平與趣味]. The fast part is a collective intelligence system that literally relies on the most ancient communication technology: the landline.

Anyone with a telephone, smart or not, can dial 1922, which is a simple landline number – it’s toll-free – and tell whatever they want to tell to the Central Epidemic Command Center, the CECC.

One day in April, there was a young boy. Because we were rationing medical masks at the time, and when masks are rationed you don’t get to pick the color, it just so happened that all his rationed masks were pink. He was afraid to go to school, saying, “My classmates might bully me or laugh at me for wearing a pink medical mask.”

One of his friends called 1922 to tell the CECC about this problem. The very next day, in our daily livestreamed press conference where the CECC answers all the journalists’ questions, you could see every medical officer, regardless of gender, wearing a pink medical mask. That immediately gained national popularity. A lot of the avatars of famous people and famous pages turned pink. Pink suddenly became the most hip color. It taught everybody about gender mainstreaming. It just made everybody a little bit more transgender, which I think is a good idea. [laughs]

The point here is that people feel they have a stake in the norm, with just a simple phone call and regardless of age. That boy probably isn’t of legal age, probably isn’t 18 years old.

Just through this simple phone call, arguing in a very natural manner and appealing to the CECC’s idea of masks for all: if a few boys don’t wear masks because they’re pink, then that is actually a public health threat to everybody else as well. Because of that, the CECC took on this gender mainstreaming role very quickly, within 24 hours.

This fast iteration cycle, this agile response, makes the social sector stronger and more robust, because instead of waiting for commands from the command center, everybody can actually just participate in the code-making.

Basically, wearing a mask is a kind of code. In Taiwan, what this code signifies is that the mask protects me from my own hands. I’m taking care of my own health. I’m washing my hands properly. I wear a mask to remind myself of that, and also to remind other people to protect themselves as well.

That idea has a higher R-value than other ideas, for example wearing a mask to protect others, to respect others, and so on. ‘Pink medical masks’ just add to the hip factor of wearing a mask. Altogether, this increases the R-value of these ideas, of these memes, even more than before. The CECC is in charge of amplifying those prosocial ideas. This is what I mean by a republic of citizens.

Puja Ohlhaver

It seems, Audrey, that your view is to take technology and use code to help us, to assist us: assistive intelligence. Yuval, correct me if I’m wrong, but it seems that you worry about code codifying, say, our existing biases. Audrey, your solution seems to be a participatory framework combined with fast iterations. Is that how you would characterize the solution?

Audrey Tang

Yeah, definitely. It also must be fair and also fun, which I will get to later. The fast part, yes, it’s essential.

If the government only responds with what we call patches, fixes to the system, every year or even only every four years in the case of votes and elections, which is like three bits uploaded every four years, then there’s just not a sufficient signal to correct previously biased or wrong code.

If everybody can very freely fork, that is to say develop alternative visions, and also merge within a 24-hour cycle, then something magical happens. It enables the few civic technologists to become like civil engineers.

Their work will then be used by over half of the population, which gives these code-makers the same kind of role as the highway makers, the road makers, and so on, but with the additional benefit that everybody is able to imagine different futures.

If it gets rough consensus, that is to say, if a lot of people can live with it, then it just turns into the overall new reality for the society in a very rapid fashion, like pink going from being sissy to being very hip and cool. It literally took just 24 hours.

Yuval Noah Harari

The main issue for me, again from a historical perspective, is that democracy gives authority to the desires and feelings of people. This is the ultimate authority in a democracy. I completely agree that letting people voice their desires and feelings just once every four years is certainly not enough. It’s not efficient.

The big challenge we are facing and will increasingly face in the 21st century is that now there is the technology to hack human beings and therefore also to increasingly manipulate their desires and emotions.

Of course, throughout history, kings and emperors and prophets and religions always tried to get inside people’s minds, understand what’s happening there, and manipulate it. We saw mass movements of manipulation in history, as in the totalitarian movements of the 20th century.

Ultimately, it was inefficient, not only because they didn’t have the technology that I discussed earlier to really follow everybody all the time; the main obstacle was simply the lack of biological knowledge.

Humans didn’t understand human biology, the human brain, well enough to really understand what’s happening there. In the end, humans remained like black boxes. Even somebody like Stalin or Mao or Hitler couldn’t really figure out what’s happening there.

Now, it’s not just the breakthrough in computer science. It’s at the same time the breakthroughs in the biological sciences that are opening up this black box. They are making it possible, again, to hack human beings, understand what’s happening inside, and therefore open completely new ways of manipulation.

Once you have something like that, the ability to manipulate, on scale, the desires and emotions and feelings of millions of people, then simply having faster iteration of feedback is not necessarily enough.

Again, the full ability to hack human beings is still in the future. We are not there yet, but even what has been happening in the last few years is alarming. You have all these algorithms and apps and devices, and what they are really about is hacking human beings. You have the smartest people in the world working on this problem of how to push our emotional buttons.

You have the big corporations. They say, “Look, people are spending 30 minutes a day on our app, on our device, on our platform. We want them to spend one hour. This is your mission for this year.”

They take the smartest people in the world and give them this task: how to hijack people’s attention and keep them on the platform. These smartest people in the world discovered how to press our emotional buttons, the fear button, the hate button, the greed button. This is the easiest way to grab people’s attention.

Looking to the future, again, the threat of a rising dictatorship, a new kind of dictatorship is a big one. But even if we avoid that, how to deal with the new tools for hacking the human brain, the human mind, that’s the really big question.

Take the example I began this interview with. If I think about myself when I was, say, 14, and this algorithm analyzes my behavior, analyzes wherever my eyes go. I walk down the beach, and the algorithm analyzes whether I focus on cute guys or cute girls. Or it analyzes what happens to my eyes when I watch videos or television or whatever, and it discovers that I like boys more than girls, and it either tells somebody or it uses this to manipulate me in some way.

If it’s a bad manipulation, like Coca-Cola using this knowledge to sell me something I don’t need, showing me commercials with sexy guys so that I buy their product and I don’t know why, then they are using it against me.

But the really big issue is, what if the algorithm isn’t malign, it’s not working in the service of some corporation, and I don’t know this about myself, but the algorithm knows it? There is a kind of imbalance here. What happens then?

I mean, should it tell me that I’m gay? Should it expose me slowly to different contents that will enable me to realize this about myself? What is the proper relationship with this kind of entity?

One more thing. We have had this kind of entity throughout history, in a way: a mother or father or teacher. My mother is somebody who, when I was 14, maybe didn’t know I was gay, but she knew a lot of things about me that I didn’t realize.

But my mother had my best interests in mind when thinking how to use this information about me, and we have thousands of years of experience in building the kind of beneficial parent-child relationship.

Now we are suddenly creating a completely new kind of entity that actually knows far more about me even than my mother did, and we have no cultural or historical traditions for what kind of relationship I should have with this AI mentor that has all this information about me.

I don’t want this to sound dystopian or utopian, it’s just fascinating as a historian to think what kind of relationships will emerge out of this new technology.

Audrey Tang

Yes, to this point. Actually, there were two points. One is the lack of accountability, the Coca-Cola example, and one is value alignment, being ‘all watched over by machines of loving grace’. The first point is easier to address.

Taiwan, in our previous presidential election, managed to establish a norm, through a completely independent branch of the government called the Control Yuan, or the Control Branch, that makes campaign donations and campaign expenses radically transparent, meaning that the raw data is published for independent journalists to analyze.

They’ve been doing this because we, the civic hacktivists, have been petitioning for it, even doing acts of civil disobedience for it.

When we really started doing that, back in the mayoral elections in 2018, we discovered that there was a large chunk missing. Social media advertisements were not reported as campaign donations, nor as expenses. Many of them came from outside Taiwan, and we didn’t know from where. It really was an unaccountable black box.

We read, of course, the reports about how some foreign powers interfered with other countries’ elections using precision targeting technology, exactly the kind that Yuval described.

It predicts, in a micro-targeted way, what people’s hidden fears and hopes are. They cater to those fears and hopes and target just a very tiny slice of people, trying to persuade them not to go and vote, or to avoid a certain candidate, or they carry out some other kind of emotional manipulation.

We told all the multinationals: look at our Control Yuan 監察院. This radical transparency is the Taiwanese norm, and you have two choices. You can either publish your real-time advertisement library, just as our Control Yuan does in radical transparency, so that people engaging in such stark manipulation will be discovered and shamed, or you can simply not run political and social advertisements during our election period. Your choice.

We did not pass a law for that. We basically just let them know that there would be social sanctions if they violated the Control Yuan norm, our election norm. Facebook decided to radically open its ads library, while Google and Twitter and so on simply refused to run political advertisements during our election.

So that is a very neat answer to the accountability issue, which to me is the more minor issue. The value alignment issue is much larger.

Our mothers, fathers, and community members may offer interpretations, that is to say, their sage advice to us, of course with our best interests in mind, that are nevertheless colored by their own life experiences. Even though those interpretations may be valid, to a growing teenager they also foreclose certain other possibilities, because that’s the power of interpretation.

To me, a way to free ourselves from this value-alignment issue is simply to have, as a norm, multiple interpretations. Just as you can have many human assistants, each perfectly aligned with you and accountable for explaining any decision that is not in your best interest, you can have those different assistants compare notes. If one of them consistently does things that are not value-aligned with you, at least the others can warn you about it.

I think this plurality, rather than a singularity, is the vision I have written into my own job description. Instead of user experience, we need to think about human experience. When we talk about user experience, well, I know some other industries that also use the term 'user'. If you use that term, you only care about the time people spend addicted to the technology. It's a zero-sum game of attention and time span.

But if you think of the total human experience, then these different interpretations may add to one another and eventually liberate oneself from any one, singular vision of oneself.


Yuval Noah Harari

I agree that one way to deal with this issue is to have this plurality of actors and viewpoints. Funnily enough, when people think about algorithms taking over, what they fear most is for democracies, and democracies are indeed vulnerable to things like election manipulation.

But people don’t realize that in the long run when you talk about algorithms really taking over – not in a science fiction way of the robots rebelling and trying to kill us – it’s algorithms gathering more and more power to themselves. Even if you have a human president or a human prime minister, actually all the important decisions are being taken by an algorithm that the prime minister cannot even understand.

The algorithm comes to the PM and says, “Look, there is a huge financial crisis about to happen and we must do this. But I can’t explain to you why because your brain just can’t analyze all the data that I have gathered.” So even though the PM is still the official, the one in charge, actually it’s the algorithm running the show. This is increasingly happening.

The funny thing is the dictatorships are actually far more vulnerable to this kind of algorithmic takeover. If you think about the PRC, I often think about maybe if I had time to write a science fiction novel or a science fiction movie about algorithms taking over, my favorite setting would actually be the Communist Party of the PRC.

What happens if the party gives algorithms increasing control over the appointment of lower-ranking officials? Not the people in the Politburo. It’s too political. It’s too complicated.

But let's say the appointment of all the officials in local cities and branches and so forth is increasingly done by an algorithm that constantly follows all 80 million members of the Communist Party, collects data, analyzes it, and learns from experience.

Very soon nobody in the Communist Party actually understands why the algorithm is deciding to appoint this person or to advance that person, but they trust the algorithm.

Very soon the algorithm basically takes over the party. Even if one day the Politburo wakes up and says, “Oh, no, it’s gone too far, we’ve lost control,” it’s too late for them because the algorithm has already appointed all the lower-ranking members.

This kind of algorithmic takeover, which I think is far, far more likely than the science fiction scenarios of a robot rebellion, can actually happen far more easily in an authoritarian regime than in a democratic regime. The only ingredient missing is for the people higher up to develop enough trust in the algorithm.

In a democracy, you need to convince millions of people to trust the algorithm in order for the algorithm to take over. In an authoritarian regime, you just need to convince a handful of people who are already primed to accept this kind of logic, that there is somebody who collects all the information and knows best.

So yeah, that’s just one possible scenario. But the really deep problem of value alignment is that even if you have a democracy and you have many players, as the algorithms get to know us better, and as we listen to them in more decisions in life, they increasingly also control our own values. That’s especially true if they accompany us from an early age.

I’m now 44, so my values have been shaped by decades of experience. If an algorithm now increasingly makes decisions for me, the algorithm will still find it difficult to change my core values.

But if you start with a baby, or a young child, and more and more decisions about the life of the child are taken by an AI mentor, again, not an evil mentor that actually serves some corporation, a mentor which is supposedly really serving the interests of that child – it learns on the way, it changes – you trust the algorithm.

You don't really know where these decisions are coming from. A human can't go over all the data and understand it, and these decisions shape the values of the child as she or he grows up. So again, I don't have a fixed opinion about it. I don't think it's dystopia or utopia. I just think it's a completely new kind of situation that, as a historian, fascinates me.

Puja Ohlhaver

One of the ideas in RadicalxChange is data dignity. Audrey, you alluded to it earlier at the start of the conversation, in the architecture. I think one of the things to consider here is how can we architect algorithms so that we have this plurality? And data dignity I think is useful.

The idea of data dignity is that you separate control and use of the data. So when you separate those two, you ultimately are breaking the monopoly and monopsony on the data that big tech and big governments have.

If you do that, and you separate control and use, then you can imagine that there’s going to be lots of different data cooperatives or collectives which can accept or reject algorithms. You can imagine a plurality of algorithms on top of a plurality of these collectives which we choose from.

Audrey, do you think that that’s a compelling vision of the future that can solve some of the problems which Yuval is worried about?

Audrey Tang

The idea of a single mentor presupposes a kind of linear development path. As a junior high school dropout, I have no personal experience of the senior path, and so I only hear that people have attended this thing called university.

In any case, what I'm trying to get at is that in Taiwan our new curriculum, which started last year, advises children to set their own projects and to solve structural problems through problem-based learning.

And the teachers could be institutional, or in the community colleges, or in local elderly learning groups, indigenous language circles, and so on.

These are the different circles that a child, when they care about, for example, climate change, can reach out to, instead of relying on a textbook to teach them the truths and thoughts of climate science, which doesn't make sense unless you have a compelling motivation to understand and solve the problem.

The idea of self-motivated learning is at the core of our new curriculum, and this comes after decades of alternative, experimental and home-schooling education experiments in Taiwan, all of which are legal. For the past decade or so, up to 10 percent of Taiwanese young people have been able to choose their own curriculum, free of the official one.

After we learned what worked and what didn't, we decided that this kind of autonomous joining of circles that tackle the same problem is the best way to free people from individual-to-individual competition, which tends to dominate East Asian educational thought.

Once you get trapped in that linear growth, then of course you have first place, second place, third place, as if on the same running track. But if you are attracted to a systemic problem that you seek to solve, then you basically choose your own course, and you win at the starting point.

Then you meet other people who are also forming new constellations from all the different disciplines and all the different cultures in a transcultural way. In this way, everybody you meet will be in a very different culture, and they probably don’t agree with each other about their worldviews.

Their algorithms, if they empower themselves with augmented reality or assisted intelligence, will probably have very different values. Then the child will be able to form, in a sense, their own constellation.

I think this is where the RadicalxChange idea of intersectional data really shines, in the sense that one defines oneself as a bunch of hashtags. The more dimensions I explore, of course, the more unique the combination is.

In the end, it's just a plurality of hashtags that I associate myself with, and through which I curate the kind of data that's useful to these different ideas or different values, all the while remaining true to my own chosen combination, this constellation. That's at the core of the data dignity idea.

Yuval Noah Harari

I think the idea of an AI mentor doesn’t imply a single trajectory or a particular value system. Just the opposite. It can actually encourage exploration and a wide breadth of interest even more than traditional education systems.

If you think about something like music, let’s say I have a particular musical taste. Now one vision of the algorithmic sidekick or the algorithmic mentor is that the AI learns what I like and just gives me more and more of that and kind of imprisons me in the cocoon or the prison of my own previous biases and opinions.

But the opposite view is that, no, because it knows me so well, it also knows the best way to expose me to new musical tastes. Sometimes when you try too much, then it backfires. So it knows that 10 percent of the music that it gives me would be from genres or traditions that I myself would never think of trying. It can also know the best moment in the day or the week when I’m most open to new experiences.

In the traditional way of school, you go to music class. So music class is every Tuesday, at 11 o’clock. That’s it. This is when you are supposed to be exposed to new kinds of music.

But maybe on Tuesday, at 11 o’clock, I am very tired, or I am concerned about something. This is the worst moment to try and introduce me to jazz, or to Indonesian gamelan music. The AI will know that actually at seven o’clock in the evening, on that particular day, I am much more open, so it will try then.

In this way, hacking human beings does not necessarily mean imprisoning them in their own previous preferences and biases, it can lead to unprecedented variety and exploration. So it can really go in a lot of different ways.

Audrey Tang

I completely agree. When I said 'linear progression', I was merely referring to the singular pronoun with which you refer to the AI, 'it' rather than 'they', though 'they' could also be singular…

But anyway, what I'm trying to say is that, for example, my personal phone is a feature phone – it doesn't even have a touchscreen – and so I don't get addicted to it. I don't know about you. [laughs] I find touchscreens very addictive, and I don't really like being addicted to that surface.
With a feature phone, I deliberately restrict my input bandwidth to the device, so that, if I manage my attention well, it will probably never have sufficient bits about my preferences to make the kind of exploratory judgments or interpretations that Yuval just described, which is, in technical terms, a blended volition of my different moments or across my communities and so on.

When it tries to wildly guess my preferences, extrapolating my volition, because my input bits to it are so few, it invariably gets them very wrong, even hilariously wrong, and so I don't pay much attention to it.

This is like wearing a medical mask. It protects…I'm not talking about biological germs and viruses, I'm using it as an analogy. I use, for example, the Facebook Feed Eradicator, which is like a mental medical mask. If you install this plug-in, it removes the news feed from the Facebook website.

Everything that is autonomous, that is to say, everything you do intentionally, is still possible. You can still search, view live streams, or whatever, but all the unpredictable parts, the ones that push your emotional or dopamine buttons, disappear, replaced by a Zen saying or an Adler saying or whatever saying.

What I'm trying to say is that before society developed a norm of countering spam, of people flagging things as spam, there were things like SpamAssassin, for lack of a better name, things you could install by yourself, like personal protective equipment.

Eventually, people figured out the norms around spam. Nowadays we don't worry about spam mail that much, because we understand that our attention is too precious to give to the spammers. Either there is a vicious cycle in which you give them more attention and the scammers have more bits to work with, or you deny them the initial contact and protect yourself and your community from the ripple effect.

Once the infodemic has an R-value under one, these bad or malign ideas will not spread. Even so-called pro-social ideas, if they hack into our automated systems, will be kept at bay, because we will have sufficient room to breathe through our own conscious systems, which is the human mind's moderation system, in theory.
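
[Tang's R-value analogy can be made concrete with a toy branching-process calculation. This is an illustrative model only, not anything actually deployed: each carrier of a rumour passes it on to R new people on average, so with R under one the cumulative audience is bounded by a geometric series, while above one it explodes.]

```python
def total_reach(r_value, seeds=100, generations=20):
    """Expected cumulative audience of a rumour in a simple branching
    model: each carrier passes it on to r_value others on average."""
    reach = 0.0
    carriers = float(seeds)
    for _ in range(generations):
        reach += carriers          # everyone reached so far
        carriers *= r_value        # next generation of carriers
    return reach

# R < 1: the audience stays bounded by seeds / (1 - R) = 200,
# however long the process runs.
contained = total_reach(0.5)
# R > 1: the same hundred seed rumours reach hundreds of thousands.
viral = total_reach(1.5)
```

[Denying malign ideas that initial attention is what pushes their effective R below one; once it is, the series converges and the infodemic burns out on its own.]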

Puja Ohlhaver

I’m going to switch gears a little bit in the conversation and shift over, given the current COVID crisis, to talk about some of the global problems that we’re worried about. Yuval, you have three on your list, AI, climate change, and nuclear weapons. Not sure if you’ve added pandemics onto that list.

One of the remarkable things about this crisis has been Taiwan's exceptional performance in suppressing the virus without a lockdown and without community spread. It is a national narrative and a success for Taiwan, but this is a global problem.

Audrey, my question for you is: what was the narrative of Taiwan's success? Do you think it was a shared nationalist identity that pulled the Taiwanese together, or a shared participation in solving the problem? And what can the rest of the world extrapolate from it to replicate that success?

Audrey Tang

There are actually two crises at the same time. There is the pandemic, which is the biological one. Then there is the anxiety, fear, outrage, conspiracy theory and panic buying that is collectively referred to as the infodemic.

A good analogy is that if you do not put out vaccines of the mind against those conspiracy theories, that is to say, deliberate, intentional communication of basic scientific understanding, then people actually suffer from an epistemic void, because they don't really know what's going on. They tend to fill it with whatever their mental projections are, which tends to divide people more and makes things even worse.

In Taiwan, very early on, we established the 'fast, fair, fun' principles. Take 'fast': anyone who cares enough to ask anything about the counter-pandemic strategy can call 1922 and get their questions answered.

Or journalists can ask their questions in a much more elaborate way at the daily 2:00pm briefing. Even that was not sufficient to quell people's fears, for example about the shortage of protective gear and masks. There was panic buying very early on, when medical masks were first distributed through convenience stores and pharmacies.

A civic technologist named Howard Wu in Tainan City developed a very simple idea. He coded a map on which he invited his friends and family to report which parts of the city still had masks in stock. You could see that the green ones were the places that still had masks in stock, and the red ones were those that had run out.

Just by this very simple gesture, he made sure that people could avoid queuing needlessly and queue fruitfully instead. He didn't anticipate that it would get national press attention, and he very quickly had to shut the website down: having used the Google Maps API, he owed Google USD 20K after just two days. [laughs]

One of the people using his app was me, [laughs] and so I talked to our Premier, our Prime Minister, and said, "We need to trust citizens with open data."

This is one of the most interesting things in Taiwan's history of building open data and open APIs: when we switched to rationing masks through the pharmacies, anyone could use their single-payer national health card, which covers 99.99 percent of the Taiwanese population.

They could collect their masks at a nearby pharmacy, and behind this is a machine-to-machine system that publishes the stock level of every pharmacy every 30 seconds. More than one hundred different civic technologists developed maps, chat bots, voice assistants, and so on.

It became essentially a distributed ledger, one that can only be updated every 30 seconds, but with no possibility of going back in time to change the numbers. People queue in line and finally get their three masks per week or, later on, nine masks per two weeks.

They expect, after a couple of minutes, to see on their phones the stock level of that pharmacy deplete by nine, or ten if there is a child. If it doesn't deplete, or if it actually increases, they can call 1922 right there and report the anomaly to the CECC. I have spent some time on this in detail because it captures what is at the root of the idea of a data collaborative.
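
[The citizen cross-check Tang describes, comparing the 30-second stock feed against the masks one has just collected, can be sketched in a few lines of Python. The CSV schema, field names and figures below are hypothetical, for illustration only; the real open-data endpoint used its own format.]

```python
import csv
import io

def parse_stock(csv_text):
    """Parse one snapshot of per-pharmacy mask stock levels
    (hypothetical schema: pharmacy_id, adult_masks)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["pharmacy_id"]: int(row["adult_masks"]) for row in reader}

def find_anomalies(before, after, collected):
    """Flag pharmacies whose published stock did not drop by at least
    the number of masks just collected there."""
    anomalies = []
    for pid, n in collected.items():
        drop = before.get(pid, 0) - after.get(pid, 0)
        if drop < n:  # stock failed to deplete, or even increased
            anomalies.append(pid)
    return anomalies

before = parse_stock("pharmacy_id,adult_masks\nA1,120\nB2,80\n")
after = parse_stock("pharmacy_id,adult_masks\nA1,120\nB2,71\n")
# B2 dropped by the nine masks collected; A1 did not, so it is flagged.
print(find_anomalies(before, after, {"A1": 9, "B2": 9}))  # ['A1']
```

[In the system Tang describes, a flagged pharmacy is what a citizen would report to 1922; here it is simply returned as a list.]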

It's everybody holding each other accountable, everybody ensuring that there really is a fair distribution, because they can verify it by themselves, and independent analysts can build more dashboards to show that there is oversupply in certain areas and undersupply in others.

There are people who work such long hours that they cannot collect masks at a pharmacy, so we had to work with convenience stores that are open 24 hours a day. This has already become an international narrative.

After we developed this code, which is all open source, by the way, people in Korea used the Taiwan model to convince their government that publishing the numbers weekly, or daily at the end of the day, is not enough. You have to provide a real-time API, just as Taiwan did.

The first mask-availability map in South Korea was written by Finjon Kiang, in Tainan in Taiwan. Even though he doesn't speak Korean, he speaks JavaScript, which is what's important here. This enabled a new breed of civic technologists who work like civic engineers, because more than half of the population, ten million people, use their work.

Fairness of all kinds is at its core. At this moment, more than 90 percent of people have used our mask-rationing system; the remaining 10 percent, perhaps, already had plenty of masks in storage before the pandemic.

They can also use the app to dedicate their uncollected quota to international friends as humanitarian aid. You can actually see the names of the 300,000 people who dedicated more than five million medical masks internationally in this way.

You see, when high-level officials start wearing masks printed with 'Made in Taiwan', people approach us and say, "Oh, we also want the blueprint, so that we can automate the production of two million masks a day in a small, automated factory."

We share the blueprints with many other jurisdictions as well. Fairness is definitely not just national; it is really an international perspective. That's the fair pillar of fast, fair and fun.

Puja Ohlhaver

Taiwan has also had an interesting social innovation around climate change, which is another obvious global externality. Audrey, can you maybe tell us a little bit about the distributed sensors operated by citizens and run on a distributed ledger? Is that a solution we can also…?

Audrey Tang

Yeah, definitely. The mask maps could be prototyped so quickly because another map, the Air Map, was already in place. People in Taiwan voluntarily join this distributed ledger, basically dedicating their school, their balcony, or wherever, to measuring the climate.

They measure, for example, air pollution levels and so on, and upload the readings to the civil IoT system, which is powered by distributed ledger technology.

What this means is that if you live in a place with some air pollution, and you want to know whether it is from mobile or overseas sources, you reach not for the hundred or so very high-precision weather stations in the country, but rather to your primary schools, and to interesting observations made even by high schoolers in their data-stewardship classes, using very cheap airboxes, less than 100 US dollars each, connected to the 4G network, which costs 16 US dollars per month for unlimited data everywhere in Taiwan, because we treat broadband as a human right.

All of this enables a kind of collective intelligence that contributes to climate science, because at least one copy of the ledger resides at the National Center for High-performance Computing, the NCHC, home to a top-twenty supercomputer in the world – top ten, if you count energy and carbon footprint.

In any case, this supercomputer can take any junior high schooler's code and, if it is better code for predicting air pollution, or for the climate model and things like that, automatically give it access to the entire civil IoT system, without the junior high schooler having to download any data to their personal computer.
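
[The kind of citizen analysis Tang describes running against the airbox data can be sketched with a toy aggregation. The (district, pm25) records and the background threshold here are entirely hypothetical; the real civil IoT feed has its own schema and far richer data.]

```python
from statistics import mean

# Hypothetical airbox readings: (district, PM2.5 in micrograms per m3).
readings = [
    ("Tainan", 35.0), ("Tainan", 40.0),
    ("Taipei", 12.0), ("Taipei", 15.0),
]

def district_averages(readings):
    """Average PM2.5 per district from citizen-run airboxes."""
    by_district = {}
    for district, pm25 in readings:
        by_district.setdefault(district, []).append(pm25)
    return {d: mean(values) for d, values in by_district.items()}

def likely_local_sources(averages, background=20.0):
    """Districts well above an assumed regional background level
    suggest local rather than transboundary pollution sources."""
    return sorted(d for d, avg in averages.items() if avg > background)

avgs = district_averages(readings)
print(likely_local_sources(avgs))  # ['Tainan']
```

[The point of the civil IoT design is that a student would run exactly this kind of code on the NCHC's copy of the ledger rather than on downloaded data.]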

This, in a sense, democratizes even very basic things like climate research and climate science, opening them to citizen scientists as, again, another application of our fairness principle. That's also why we could get the mask map running so quickly.

Puja Ohlhaver

Yuval, does this make you more optimistic about these global challenges that we face?

Yuval Noah Harari

Yeah, I think that with many of these global challenges, the solution has to be global. Of course, it’s rooted, it’s based in individual countries. The most important thing is not to fall into the trap of thinking that there is a contradiction between nationalism and globalism, and that we need to choose.

There is no contradiction. Nationalism is about loving your compatriots, not about hating foreigners. In many situations, like in this pandemic, or with global climate change, if you really love your compatriots and you want to take care of them, you have to cooperate with foreigners.

To be a good nationalist, you also have to be a globalist. There is no contradiction. I think we are seeing it with initiatives like the one we just heard in Taiwan.

Also, a very important thing is that some people think that to deal with these emergencies, whether the pandemic or global climate change, we need some kind of authoritarian regime that will tell everybody what to do. Otherwise, there is no way to reach a consensus.

The example of Taiwan proves the opposite, and not just Taiwan. In this pandemic, it's true that the authoritarian PRC has dealt with the epidemic better than the democratic USA, but they are not the only examples.

Many of the countries which have dealt with the epidemic the best – whether it’s in East Asia, like Taiwan or South Korea, whether it’s New Zealand, Greece, or Germany – they are democracies, because generally, a well-informed and self-motivated population is far more efficient than an ignorant and policed population.

With a well-informed population in democratic countries, instead of wasting resources on policing the people, you can actually benefit from their initiatives. This is the best way forward.

Puja Ohlhaver

We’re actually coming to the end of our time here, so I think I will just ask a final question. That is narratives for the future. Yuval, you are a medieval historian looking at the future, and Audrey, you are a technologist hacking the present.

How do we develop that new shared story for the future without erasing our unique and individual attributes and differences to solve these global problems? Can we usher in a new renaissance, what’s the narrative of that renaissance? Audrey, I’ll start with you.

Audrey Tang

Certainly. For the many people who worry about Taiwan's future, I will start with the island. People often ask me: where is Taiwan going? Where are we going as a nation, as a country?

I often say it's very predictable: the tip of Taiwan, the peak of Taiwan — Saviah in an indigenous language, Patungkuonʉ [八通關], or the Jade Mountain [玉山] — grows two centimeters every year, sometimes three.

We're growing towards the sky. We're growing skyward. That's a geological answer. Why does the peak of Taiwan grow? Because we're caught between the Eurasian Plate on one side and the Philippine Sea Plate on the other.

They bump into each other all the time, causing endless earthquakes. Because of that, we have learned to make not only our own buildings resilient to earthquakes, but also our ideascape.

In Taiwan, you can find a lot of people – not just academics, but everyday practitioners – arguing for a PRC-style authoritarian control of data. You can find people arguing equally strongly, from the European point of view, for a GDPR-style social infrastructure that protects a person's privacy interests against the surveilling state.

Or you can find people arguing – again, very strongly – from the US-based viewpoint of data as basically an asset, the new oil, an extraction-based idea, and so on. All of these ideas coexist in Taiwan.

Just as at the very beginning, when I said we legalized marriage equality by legalizing the by-laws and not the in-laws, we always manage to find innovations that capture the common values among the different positions. That is the true vision of sustainability, of working for the benefit of homo sapiens seven generations down the line. That's what matters.

What doesn't matter is the zero-sum games that people play at the present moment from their own viewpoints. Taiwan benefits from those plural viewpoints, each with their own AI sidekicks, I'm sure. [laughs] That actually frees us from any dominating, overarching narrative, in the same way as having more than twenty national languages does.

I actually think this Taiwan model is not confined to Taiwan. We can see many similarly minded people looking beyond zero-sum games in various different ways. RadicalxChange, for example, [laughs] works with market power but for social benefit, or the other way around. You don't know. [laughs]

The idea is to look past the traditional divides, the false dichotomies, and to see them instead as different dimensions, so that you can develop along both dimensions and reach a higher plane of existence, if you will. That is also humanity's future. We will benefit from the plurality of civilizations and indeed grow skyward.

Yuval Noah Harari

I would say that humans are storytelling animals. We rule this planet because we are the only animal, as far as we know, that can create imaginary stories and believe them. This is the key for cooperation among humans.

We cooperate because we believe in imaginary stories about gods and nations and money even though these things exist only in our own imagination, even only in our own mind. This is not bad. This is the bedrock of almost everything we do.

Money, obviously, has no objective value; it has value only in our minds, in contrast to, say, a banana, which has objective value: I can eat it, it sustains me. And that's not bad. Without money, we couldn't have trade networks like the ones we have today.

The key thing is to create stories that serve us without being enslaved by them. The danger that humans constantly face is that they come up with some big story to help organize society. Then they forget it’s just a story we invented. They get trapped in it. They start harming themselves or others in the name of the story.

Think about something simple, like a game, like football. Obviously, we invented football. It's fun. There's nothing bad about it. But if you start beating up or killing people because you lost the game, then that's a problem.

It’s the same when we look to the future. We need to create new stories to unite humankind, but we have to be extremely careful to remember that it’s all done in order to alleviate suffering.

I would say that the test of reality… Reality is still there. Behind all the codes and all the stories, reality is still there. I would define reality by suffering. If you want to know whether something is real or not, whether the hero of your story…

You believe in the nation or in some god or in a corporation or whatever. You want to know if it’s real, ask whether it can suffer. A nation cannot suffer. Money can’t suffer. When the dollar loses its value, it doesn’t suffer. Computers, too. Code, as far as we know, doesn’t suffer.

Whatever story we create in the 21st century in order to deal with the new challenges, we should constantly ask ourselves this question. Who actually suffers? Remember that everything we do is in order to alleviate that suffering. Then we are on safe ground.

Audrey Tang

That's a very powerful interpretation, very enlightening. As an oyster vegan, I will not go into a debate about whether oysters are real or not, [laughs] about whether they can suffer or not. What really makes sense is to empower the people – and by people I mean any being that can suffer – to empower the people closest to the suffering.

If we keep coding to empower the people who are closest to the pain, who are indeed suffering, then I would argue that they become hackers in the civic-hacking sense: they cannot be restrained by their biology – it's Pride Month, after all – or restrained [laughs] by their social standing, or by other old stories that people merely repeat but do not co-create.

Then, liberated from those old stories, they become story weavers who can determine a better destiny for everyone of sapien-kind, if that's a word. If we concentrate power in the people who feel the least suffering, people who already enjoy overly hedonistic lifestyles, then we are in real danger.

Even though hedonism is not zero sum, it tends to reinforce itself into a self-trapping cycle. I would also say that to hack or to be hacked is not a question at the individual level. Rather, it’s at the societal level, and we can keep looking at – just like the Gini index – a code weaver’s, story weaver’s index of how much the individuals who are closest to the pain and suffering can co-create the norms and the code that we’re living by.

Yuval Noah Harari

I fully support that. That’s a very, very good way of putting that.

Puja Ohlhaver

Wonderful. Thank you. Thank you, Yuval. Thank you, Audrey. Audrey, you have this beautiful quote which is your own story. Would you mind sharing that? On the singularity is near but… I won’t say it. I’ll let you.

Audrey Tang

Sure. It’s my job description, actually. Three and a half years ago, when I first became digital minister, people often confused digital with IT, information technology, or ICT, information and communication technology. I kept telling people that technology is about talking to machines. Digital is about forming new possibilities in societies.

It’s hard to distinguish those two, so I wrote a poem, or a prayer really, as my job description. I will read that at Puja’s [laughs] request. It goes like this.

When we see the Internet of things, let’s make it an Internet of beings.
When we see virtual reality, let’s make it a shared reality.
When we see machine learning, let’s make it collaborative learning.
When we see user experience, let’s make it about human experience.

Whenever we hear the singularity is near, let us always remember the plurality is here.

Puja Ohlhaver

Very beautiful. Thank you very much. Thank you, Audrey. Thank you, Yuval. I hope this…

Yuval Noah Harari

Thank you.

Puja Ohlhaver

…conversation has been enlightening for both of you. It’s certainly been an honor and a very humbling experience for me. I wish you both a wonderful end to Pride Month.

Audrey Tang

Live long. Prosper. [laughter]

Puja Ohlhaver

Thank you.