Cutting Through The Digital Noise in Ukraine

By Keith McManamen

The #HackingConflict #Diplohack brought together six interdisciplinary teams to address the challenge: how can new technologies be leveraged to empower nonstate actors and civil society groups in zones of conflict? The two-day event drove teams to work intensively to find innovative solutions that would disrupt conflict in the areas of three different UN Security Council resolutions.

Our team selected the challenge covering UNSC Resolution 2202, which endorses the ceasefire in Ukraine and the Minsk agreements. Our group consisted of researchers, scholars, programmers, data analysts, and practitioners, each bringing a unique perspective and skill set to the project. Over two days, we worked to design an evidence-based, feasible solution to a fundamental problem jeopardizing the ceasefire agreement: the lack of accurate, credible reporting from the front lines, which makes it impossible to determine whether and where the terms are being upheld or violated. The causes of this are:

  • A lack of journalists at the front lines and in conflict areas;
  • Intimidation of and threats to the safety of citizen journalists; and
  • Aggressive campaigns of propaganda and disinformation, in print media and especially online.

Yevhen Fedchenko, founder of StopFake, spoke to me from Ukraine, and said: “the Ukrainian discourse is dominated by trolls and botnets, and even when critical events are happening, the discussion is overpopulated which causes the real message to be lost.”


From left to right: Stanislav Budnitsky, Keith McManamen, and George Stairs. Photo by Jessica Pichard

Initially, our team sketched out an entirely new content platform that would provide a clear and open channel through which credible information from the front lines could be disseminated securely and anonymously, so that citizen journalists could report ground truths without compromising their personal safety. However, from our interviews with civil society actors in Ukraine, and from social media analysis of major networks such as Twitter and Facebook, it grew apparent that by far the biggest challenge was not the reluctance of individuals to transmit information, but the near impossibility of being heard against the deluge of misinformation. The existing platforms for disseminating content were being used, fearlessly, but these voices were still being drowned out. One cleverly concealed botnet consisted of over 19,000 pro-Kremlin Twitter accounts, which transmitted 141,000 tweets in just one week. Against the staggering volume of fake news stories propagated by botnets and the so-called Savushkina trolls, the veracity of content is no longer discernible, and the embedded metrics for surfacing important content (favourites, likes, retweets, shares) begin to break down.

So we stripped the original idea down into a tailored solution to empower ordinary citizens, whether Russian or Ukrainian, along with journalists, media organizations, and other stakeholders, to accurately identify which content was credible and which content mattered to them: an algorithm that assigns each piece of content a rating based on its credibility, and that can identify and demote content produced by bots or trolls. We called our software HighPass, after the audio noise filter. The credibility ranking would be based on several factors:

  • Was the author a bot or a troll?
  • Did the author have a real location, and was it geographically pertinent to their content?
  • Which other users had shared this content?
  • Where were they located relative to the original poster? and
  • Did the content promote dissident opinion? Did it have the power to sway an opposing point of view?


HighPass was a “hack” project in the truest sense. The program enhances PageRank, Google’s link-analysis algorithm, with learn2ban, a machine-learning toolkit that helps distinguish bots from real people. In addition to network analysis, we drew on an algorithm that identifies trolls on the basis of content, developed by researchers from Stanford and Cornell in 2015, which demonstrated the ability to parse comments and accurately identify “antisocial behaviour” (trolling) within just 10 posts. The concept of voting on content to determine popularity is already used by many popular websites, such as Digg, Reddit, and Hacker News. When we ran the prototype against a Twitter dataset provided by SecDev, its top-ranked tweet was photographic evidence of Russian insignia being worn by soldiers in Eastern Ukraine.
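The core idea of combining PageRank with bot detection, letting authority flow through the share graph but discounting endorsements from suspected bots, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the HighPass implementation: the graph shape, the `bot_score` input, and the way bot probability discounts outgoing authority are all choices made for the example.

```python
def credibility_pagerank(edges, bot_score, damping=0.85, iters=50):
    """Simplified PageRank over a share graph, down-weighting endorsements
    from suspected bots.

    `edges` maps each user to the users whose content they shared;
    `bot_score[u]` in [0, 1] is the estimated probability that u is a bot
    (in HighPass this signal would come from learn2ban-style features).
    Dangling nodes simply leak rank mass, which is fine for a sketch.
    """
    nodes = set(edges) | {v for vs in edges.values() for v in vs}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for u, targets in edges.items():
            if not targets:
                continue
            # a share from a likely bot passes on almost no authority
            weight = rank[u] * (1.0 - bot_score.get(u, 0.0))
            for v in targets:
                new[v] += damping * weight / len(targets)
        rank = new
    return rank

# Two likely bots amplify "troll_news"; one likely-human witness
# shares from "reporter". The bot-discounted ranking favours the reporter.
ranks = credibility_pagerank(
    edges={"bot1": ["troll_news"], "bot2": ["troll_news"],
           "witness": ["reporter"], "reporter": []},
    bot_score={"bot1": 0.95, "bot2": 0.95, "witness": 0.05},
)
```

The design point is that raw share counts are exactly the metric the botnets game; weighting each share by the sharer's estimated humanity restores meaning to the signal.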

As was surely the case for the other teams, I found that the hackathon setting bridged divides between disciplinary silos in a way few settings do. Academics, practitioners, private-sector researchers and analysts, journalists, programmers, and policymakers rarely find fora where they can all talk with one another and brainstorm solutions to the problems they wrestle with in their respective careers. Rarer still is the opportunity to scatter these individuals across multidisciplinary teams where they must innovate rapidly in a competitive setting.

As the technological sphere remains a dynamic and rapidly evolving frontier in the world of conflict and international politics, events that challenge decision-makers to think outside the box and to collaborate with experts from different fields and backgrounds are needed more than ever to drive innovation and disrupt conflict. It goes without saying that relying on trite, templated solutions is a disservice to those beset by conflict who are depending on us.

Keith McManamen works as a Strategic Analyst at Psiphon Inc. He collects and analyzes data on information controls, networks, and circumvention technology usage.