Fake News, Fake People: Auto-detection Is Now Reality

By Diane Israel

It’s about time. For many, say around 40 percent of the U.S. voting population, there are but two expressions of news: Fox News, and everyone else, the latter derided by President Trump and many others as "Fake News." Similarly, following the Robert Mueller III investigation into, among other things, Russian meddling in the 2016 presidential election, there is no longer any doubt that the Russians did indeed meddle, and meddle big time, creating fake social personas, pages, events and Facebook advertising designed to sow as much political division as possible in hopes of a Trump victory.

To counteract at least some of this mayhem, Israeli and American researchers have developed a method to detect fake user accounts on Twitter, Facebook and many other social networks.

It is not exactly a product but an algorithm: it detects link building that does not unfold the way a genuine human user’s connections would, and it automatically flags these improbable links.

“With recent disturbing news about failures to safeguard user privacy, and targeted use of social media by Russia to influence elections, rooting out fake users has never been of greater importance,” said Dima Kagan, lead researcher and a PhD student in Ben-Gurion University of the Negev’s (BGU) department of software and information systems engineering.

The algorithm consists of two main iterations, both built on machine-learning classifiers (a rough sketch follows the list below).

  1. The first constructs a link prediction classifier that can estimate, with high accuracy, the probability of a link existing between two users.
  2. The second iteration generates a new set of meta-features based on the features created by the link prediction classifier.
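To make the two iterations concrete, here is a minimal sketch in Python, assuming an undirected networkx graph and scikit-learn’s RandomForestClassifier. The topological features (common neighbors, Jaccard coefficient, preferential attachment), the toy karate-club graph, and the final “suspicion” score are illustrative assumptions, not the researchers’ actual implementation.

```python
# A minimal, illustrative sketch of the two-iteration idea, assuming an
# undirected networkx graph and scikit-learn. Feature choices, classifier,
# and toy data are assumptions for demonstration only.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def edge_features(G, u, v):
    """Simple topological features for a candidate link (u, v)."""
    common = len(list(nx.common_neighbors(G, u, v)))
    jaccard = next(nx.jaccard_coefficient(G, [(u, v)]))[2]
    pref_attach = next(nx.preferential_attachment(G, [(u, v)]))[2]
    return [common, jaccard, pref_attach]


def sample_non_edges(G, k, seed=0):
    """Randomly sample k unconnected node pairs as negative examples."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    pairs = set()
    while len(pairs) < k:
        u, v = rng.choice(nodes, size=2, replace=False)
        if not G.has_edge(u, v):
            pairs.add((int(u), int(v)))
    return list(pairs)


def train_link_classifier(G):
    """Iteration 1: learn to estimate the probability that a link is plausible."""
    positives = list(G.edges())
    negatives = sample_non_edges(G, len(positives))
    X = [edge_features(G, u, v) for u, v in positives + negatives]
    y = [1] * len(positives) + [0] * len(negatives)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf


def user_meta_features(G, link_clf, user):
    """Iteration 2: aggregate a user's predicted link probabilities into meta-features."""
    probs = [link_clf.predict_proba([edge_features(G, user, nbr)])[0][1]
             for nbr in G.neighbors(user)]
    if not probs:
        return [0.0, 0.0, 0.0]
    return [float(np.mean(probs)), float(np.min(probs)), float(np.std(probs))]


if __name__ == "__main__":
    # Toy example: rank accounts in a small benchmark graph by how improbable
    # their connections look. With labeled real/fake accounts, these
    # meta-features would instead feed a second classifier.
    G = nx.karate_club_graph()
    link_clf = train_link_classifier(G)
    suspicion = {u: 1.0 - user_meta_features(G, link_clf, u)[0] for u in G.nodes()}
    for user, score in sorted(suspicion.items(), key=lambda kv: -kv[1])[:5]:
        print(f"user {user}: suspicion {score:.2f}")
```

In the method as described, the meta-features from the second iteration would be fed to a classifier trained on known real and fake accounts; the toy example above simply ranks users by how improbable their links appear, since no such labels are available.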

The algorithm can also be used to identify influential people in social networks, though exactly how it does so remains unclear and invites further scrutiny.

The irony is that human deceit and meddling may have met their match, not through any populist pushback, but through an ethical algorithm. Perhaps the time has come when what is wrong with human behavior is best countered by ethically minded data mining. Of course, countermeasures to the countermeasure should be expected to follow. And maybe this is the future of artificial intelligence, with automation serving as a proxy for our moral dilemmas.