As I have been working on practical, applied projects for several years (and will want to return to that work soon), I have decided to approach this thesis from a rather theoretical frame. I am hoping the applied product/art piece etc. will arise from thinking about what I am interested in conceptually. Broadly, I know that I am interested in digitization/digital transformation etc., whatever you want to call it. I have been wondering over the last few weeks if Human Rights is a good lens from which to start thinking about these issues. I have been exploring this idea with several snippets of writing. These snippets are not fully formed nor coherent; rather, they are attempts to get some ideas out of my head and into a more developed form.

One question that interests me is the consequences of bringing another 4 billion people online over the next 10 to 15 years. This is often framed as “bridging the digital divide”. Bridging this divide is actually something I am keen for my work to contribute to; at the same time, the framing of the problem in that way feels like it has something missing. For one, it casts the work as a rather technical one (we get cheaper broadband, we get cheaper devices, we get more people digitally literate) rather than as a political choice. So I wrote this little snippet to explore what it might mean to see this as a political choice:

The intrusion of the digital into the everyday appears inexorable and inevitable. Just over half of the world’s population has now used the internet, and an even larger share is covered, i.e. lives within zones where the infrastructure to access it exists. Large-scale access has created modes of communication, systems of governance and cultures that originate in the digital but are increasingly merging with our “real” lives. A coherent digital lifeworld that collapses the real and the virtual into one is likely to appear in the near future, if it has not already.

The digital – in the form of mobile coverage, financial software for microlenders etc. – has reached the lives of even those at the global periphery and margins. The integration of such citizens and communities into the digital lifeworld is a process now backed by large institutions, companies and governments. Digital Transformation, i.e. the process of bringing hitherto offline services and their currently “unconnected” (or marginally connected) citizens into the digital, is now the stated goal of many international agencies and developing-world governments. This goal can be found embedded in strategy documents and is even codified in the Sustainable Development Goals, a set of development targets agreed to by 193 countries as well as a large number of civil society groups.

Digital Transformation plans, such as the Digital Nepal Framework, often see their work apolitically, i.e. as a means to enable citizens to access services more efficiently or to connect people to markets and boost the economy. By now, however, we are aware that the digital is more than a neutral technology that provides services without social consequences. The digital lifeworld has its own power dynamics (those who control data control wealth), its own modes of communication, and <something else>. Thus, the “digital transformation” of a farmer’s livelihood by connecting them to an ecommerce platform, by virtue of enabling them to access a broader swath of the digital, is also an act of conferring on them citizenship in the digital lifeworld. What will their role be in this new community they have been made part of, and what social orders and power structures have they been integrated into? And how will their integration into the digital lifeworld affect their socioeconomic position in the prevailing “real” world?

Human Rights, while not always realized in practice, is a set of treaties, conventions, laws etc. that carries some moral weight in the world, and thus provides a possible foundation for thinking about what our newly naturalized digital citizens can expect, do and be safe from in their new realm. Using a human rights lens to examine the rapid digitization of the Global South could be a pragmatic yet adequate way to ensure that techno-optimism is not permitted to overrun good sense.

For example, in this framework digital literacy can be seen as an extension of the series of historical efforts that secured a right to education. Much as functional literacy has been shown to empower the literate to navigate their lifeworld with greater success <include increased income, less child marriage, etc. etc.>, digital literacy can both empower people to leverage the increasing intrusion of the digital into their lives for their own benefit and give them the tools to negotiate the level of intrusion they are comfortable with. Just as literacy increases the ability of citizens to engage in their own governance, digital literacy is necessary to legitimize governance of the digital lifeworld.

Anchoring arguments about the need for digital literacy (as well as literacy efforts in the digital lifeworld) to prevailing rights to education allows us to demand that these literacy efforts follow the parameters prevailing now, i.e. that they be inclusive of various abilities, affordable for large swaths of people, and simple to access. Similarly, prevailing and recognized rights in health, finance, transportation etc. could be extended into the digital lifeworld, with new rights created only when the prevailing framework is found to be insufficient.

A number of factors recommend using a human rights lens to influence the process. First, both governments and other powerful institutions in the developing world are usually familiar with the discourse around rights because of their participation in global forums. Second, the plethora of civil society organisations that have long advocated for their causes through this framework can be engaged to add the digital as one part of their overall thinking. Third, governments and other bodies have committed to various international human rights treaties and agreements, which can be used as both legal and moral leverage in discussions with them. Finally, it is consistent with the conceptual framing that a previously separate digital realm is collapsing into the existing real world to create a digital lifeworld. In such an event, specialists in securing the education, health or civic rights of citizens can extend or alter, as needed, their prevailing practices in light of the integration of the digital into the extant lifeworld.

While the snippet above lays out a framework for thinking, the consequences of which are still not clear to me and will hopefully be discovered through this thesis exploration process, I am also thinking a bit more in terms of application. Specifically, I have been feeling that the role of human actors and institutions in leveraging technology often gets obscured in conversations about new technology-induced social changes, such as those about disinformation. Here is a snippet from a (very slow-moving) primer for Nepali journalists that I have on my to-do list… I was inspired to go back to thinking and tinkering with it by this thesis class:

In this primer, we take a social shaping perspective of technology. We encourage readers to pay particular attention to the idea of affordances, as we will refer to this notion repeatedly. An affordance, simply put, is what the design of a technology or object makes it easy to do. For example, the design of a chair – both its height and its wide seat – makes it easy for us to sit on it. Thus, being seated is an affordance of a chair. However, as we have occasionally observed in parliament, a chair can also be used to express political disapproval by being smashed. This use of a chair – as a political weapon – is what is sometimes called a “hidden” affordance, i.e. an affordance that may not be immediately obvious, and may not even be intended by its designers, but is discovered by users.

Expanding beyond a simple chair, let’s look at a more complex technology: petrochemicals. The affordances of crude-oil-based energy sources – petrol, diesel, kerosene etc. – are that they release a relatively high degree of energy per unit and that, as liquids, they are easy to unitize and transport. These affordances made them a valuable source of fuel for a large number of applications in the 20th century. The previously dominant energy source, coal, while perfectly adequate for some uses, lacked the affordances that would make it useful for fueling personal transportation vehicles.

The digital media landscape does have specific affordances, i.e. actions that could not be performed as cheaply and simply without the technology. Targeting one’s audience with exacting precision, for example, would not be possible without recent digital technologies that build our behavioral and psychological profiles by analysing our online activities. Nor would it be possible to create and distribute realistic-looking fake images and videos without the capabilities of digital editing tools. Doxxing, cyberbullying and other forms of personal attack on social media would not be possible if we were not connected through platforms whose like/share/retweet buttons make it easy to amplify content.

Nonetheless, we do not find it credible that these affordances alone are the causes of disinformation, cyberbullying etc. These affordances also require human actors to utilize them before they can have a social impact. Automobiles, for example, permit us to travel and move goods over greater distances at higher speeds. But this affordance is meaningless unless human actors invest in roads, parking and petrol pumps. In cities where local governments and urban planners invest heavily in track-based transportation – metros, light rail and intra-city trains – at the expense of roads and parking spots, one may find the affordances of automobiles relatively meaningless. The recent taxation issues around electric cars in Nepal have clearly shown that forces beyond the technology – taxation in this case – determine whether the affordances of the technology are actually utilized.

Similarly, looking from this perspective, we feel that the affordances of the digital media landscape cannot alone be held to account for its ills. We must also look at the ways in which human actors utilize these affordances and their interests in doing so. Effective disinformation campaigns, for example, often have sophisticated means of coordination between multiple people who are embedded in a social context and thus understand what kind of messaging will be effective. Most large-scale disinformation campaigns in recent times have been run by institutions – states, state-supported entities, political campaigns/activists – to further their agendas. Similarly, the cyberbullying surrounding the “Nikisha Tik Tok” case is just as much caused by how we process gender norms as a society as it is by the affordances of social media.

Seen from this perspective, it becomes apparent that our response to the social consequences of the digital media landscape must be both technological and sociopolitical. 

Amplification-for-Polarization and Disinformation 

Perhaps the best way to start this exploration of amplification-for-polarization and disinformation is by examining why and how they are deployed.

We broadly categorize the motivations for deploying these techniques into those related to narrative and those related to gain. Rather paradoxically, the techniques can be deployed both to shape a narrative and to confuse one. A subset of this motive is the use of disinformation both to build and to destroy trust in critical institutions. These techniques can also be, and often are, deployed for financial gain or social recognition.

The Guardian’s analysis of fake news spread via WhatsApp during Brazil’s 2018 presidential elections found that a disproportionately high amount of fake news was spread about the eventual winner, Jair M. Bolsonaro. Most of it cast him in a positive light and slandered his opponents. While disinformation can be used in this way, i.e. to generate positive stories and create favorable narratives, this is generally not how it is used. More often, disinformation is used to confuse the narrative. Peter Pomerantsev, in This Is Not Propaganda, argues that Russia, the most sophisticated state user of these tactics, learned from the Soviet Union that favorable narratives that diverge from the truth are unsustainable over time.

Thus, rather than investing in building up and attempting to sustain positive narratives, Russian tactics are more focused on making the current regime seem more sensible than the opposition. The Russian strategy is built on the ‘four D’s – dismiss the critic, distort the facts, distract from the main issue, and dismay the audience.’ The online channels of Russia Today spawn so much confusion and chaos that their audience simply tunes out all news, argues Pomerantsev. In this tactic, headlines that might be harmful to Russia are drowned out by spinning up more attention-grabbing headlines. The affordances of digital media technologies mean that information operations that took years to execute in the Soviet era now take a mere couple of hours on the web.

A more potent version of the tactic of confusion is attacking the credibility of institutions. The 2014 Scottish independence referendum is thought to be the first time in the post-Soviet era that Russia attempted to influence the results of an election abroad. First it attempted to influence the vote itself; then, after the results were in, Russian state-backed media such as RT and Sputnik observed that the results were ‘irregular’. In 2017, observers noted Russian interference in the separatist crisis in Catalonia, as well as in the 2016 Brexit vote. Public assessments of these incidents often involve the fear that public opinion was being swayed through Russian media and sock puppets on social media that stoked separatist sentiments. However, a deeper analysis reveals an additional fear. Whether or not these influence campaigns had their intended effect may be less important than their ability to cast doubt on the validity of these democratic processes. The outcome intended by Russian operations is likely not Brexit or Catalan independence so much as a narrative that discredits elections and thus the legitimacy of democratic institutions.

Below we note a few cases that highlight the motivations and tactics shared above.   

Examples of amplification 
The 50c army: Shaping a positive narrative

Anyone who has read a news story about China on a major global media outlet must have noticed the overwhelming number of comments many such stories garner. Scholars have suggested China has a ‘50c Party’ or ‘50c Army’, a group of hired commentators who attempt to shape the direction of online conversation by posting on media both at home and abroad. A leaked government memo explained that a job of these cybertroopers is to ‘promote unity and stability through positive publicity’ for the CCP. This 50c army reportedly has 2 million members, all charged with spreading positive publicity about the party and acting as ‘cheerleaders’ of the CCP.

Downing of MH17: Confusing the narrative

In July 2014, MH17, a Malaysia Airlines plane flying over Ukrainian territory controlled by Russian proxies, was shot down by a Russian anti-aircraft missile. To sow confusion about who was responsible for the crash, Russian information operations put out conflicting narratives – some blamed the Ukrainians, who supposedly thought it was Putin’s private jet; others suggested it was a false-flag operation and that dead bodies had been put on the plane in advance; yet others suggested the plane was taken down by Ukrainian jets; and so on. By sowing confusion, Russian authorities were able to deflect blame in the heat of the moment. By the time a credible investigation was conducted and conclusive evidence released years later, the issue had lost much of the immediate pressure it would have generated on Russia.

Macedonian Fake News: Profiting from disinformation

In the lead-up to the 2016 US presidential election, young Macedonians running content farms were earning millions of dollars by publishing blatantly false content with high viral potential. The best-known example was the headline “Pope Francis Shocks World, Endorses Donald Trump for President,” a lie manufactured in Macedonia before it spread like wildfire on US social media. These youths had no regard for the political outcomes of their actions; they were only in it for the money.

Media manipulation tactics are increasingly adopted by nation-states and political parties – a University of Oxford study found that such campaigns were employed in 70 countries in 2019 – and they have been found to be very effective in fragmented and polarized societies.

It should be cause for concern in the Nepali context that both major powers with a deep interest in the country – India and China – are among the 7 “sophisticated state actors” employing media manipulation tactics. The EU DisinfoLab Indian Chronicles report outlines a 15-year campaign – involving spoofing the accounts of dead professors, registering over 550 domains for fake media companies, creating over 750 fake social media accounts, and even creating fake NGOs – undertaken by Indian actors to influence the opinions of the political elite. Also cause for concern is that we have seen these media manipulation tactics used by political factions to gain an edge in intra-party disputes, suggesting they will be more widely deployed in the next election, leading to increased political and elite polarization.

At the same time, it is a matter of hope that media manipulation techniques in Nepal are rather blunt and lack the sophistication needed for extensive amplification. Furthermore, the reach of digital media is still not extensive enough for these tactics to be fully deployed. Thus there is still time to begin a public conversation and possibly find some solutions before domestic and international actors employ extensive manipulation tactics in Nepal. If political actors in Nepal can be encouraged to restrain their actions and avoid deep political fragmentation, the environment will be more challenging for international actors to manipulate. Additionally, civil society should be mindful of other social cleavages along which amplification-for-polarization and disinformation could be effectively deployed, including religious and geographical divides, but especially Nepal’s existing ethnolinguistic faultlines.

Part of my interest in human rights may arise from seeing how important human institutions are to the relationships that will evolve in the digital lifeworld. Instead of treating the effort of thinking about these emerging relationships as a “new paradigm” or as “needing innovation”, can we find frameworks that have already tackled these kinds of concerns… and might be robust enough to be extended?

It’s not that I have given no thought to the applied end-product of this process.

I have been thinking that representing some of the issues of digitization as fables or folk stories might be fun. This thought is partially inspired by this very engaging bit of speculative fiction by Sara Watson. I have sometimes used this story to prime people I am working with to consider issues of algorithmic governance. I have also spent a little time wondering what kind of story I might write to explore the idea of a digital lifeworld that merges the “real” and the digital. I have some notes sketched out for a story called “How Dhurbay Dai’s Shadow Came Alive”. Modeled on traditional folk tales, it imagines a digital footprint (first acquired when Dhurbay, a farmer in the hills, turns on a feature phone) as a shadow that increasingly becomes a real person with its own thoughts, feelings and relationships as he upgrades his phone and becomes more “connected”. I am not sure how it will end, but perhaps with the shadow independently taking out a loan… and everyone being confused about whether the real Dhurbay is liable for a loan his shadow took!

I have also been looking at IDEO.org’s toolkit for social enterprises as the kind of thing I might build for NGOs to use to think about digitization and rights. The tools from Google’s Next Billion Users that were shared in the Design for Change class were also pretty cool and gave me a lot of ideas.

Finally, I have been quite interested in seeing if a very successful methodology for teaching foundational literacy and numeracy, called Teaching at the Right Level (TaRL), could be adapted to help people rapidly acquire digital literacy skills. To consider what digital literacy skills might be, I have been looking to UNESCO’s Global Digital Literacy Framework.