There was a sense of déjà vu when Facebook’s chief operating officer Sheryl Sandberg and Twitter chief executive Jack Dorsey testified before the US Senate last week; Facebook founder and chief executive Mark Zuckerberg had been hauled before Congress in April to explain data breaches at his company.
This time the executives had to address how they would prevent Russia and others from interfering in the US mid-term elections in November. They were also grilled about what their companies were doing to exclude criminal networks, such as human traffickers and drug dealers, from their platforms.
These appearances came hot on the heels of US President Donald Trump voicing his outrage about how he and his Republican agenda were treated by Google. He felt that he featured neither prominently nor positively enough in Google search results.
There is good reason for the focus on new media. We should look at three themes, which are important to us all.
The first is privacy and how we can protect our personal data. The second is how news is reported in new media. The third is how to stop interference by foreign governments and non-state actors in democratic processes, and how to prevent people such as human traffickers and drug dealers from using new media as a means to their criminal ends. The common denominator underlying all these themes is the role of algorithms and artificial intelligence (AI). This is precisely where things become difficult, because these technologies have an ever-increasing influence on our daily lives, which we — and at times the tech giants themselves — fail to sufficiently understand.
When it comes to personal data, it is clear that the big tech companies have an obligation to protect our identities, but that is more easily said than done. They are businesses with shareholders, which means they need to generate profit; as long as users can post their texts and photos free of charge, the money has to come from somewhere. This is where the advertisers come in. Clever algorithms match users with products they may be interested in, which is a slippery slope. New-media platform owners have an obligation to explain to users that as long as they do not pay to use these networks, they are not customers — they are the product. It is an inescapable fact that products are bought and sold in any marketplace, but that does not absolve tech companies from doing their utmost to protect users’ personal data.
Trump’s pet peeve about how he is portrayed on Google is again underpinned by algorithms, which determine Google search results. When it comes to news rankings, Trump may actually have a point. Anyone can post pretty much anything they want on Twitter or Facebook, but where it appears in Google search results is then determined by our friend, the anonymous algorithm. Few people know exactly how these algorithms work, but broadly, Google ranks articles according to how often users link to them. It therefore follows that established and trusted sources such as The New York Times, the Washington Post and, in the UK, The Guardian, have an inbuilt advantage — and none is particularly noted for being a Trump cheerleader.
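The link-counting idea described above is the intuition behind PageRank-style ranking. The sketch below is a deliberately simplified illustration with an invented toy link graph and parameters; it is not Google's actual algorithm, which is far more complex and largely undisclosed.

```python
# Minimal sketch of link-based ranking (PageRank-style power iteration).
# The link graph, damping factor and iteration count are illustrative
# assumptions only.

DAMPING = 0.85      # probability of following a link vs. jumping randomly
ITERATIONS = 50     # power-iteration steps until scores settle

# Hypothetical link graph: page -> list of pages it links to.
links = {
    "nytimes": ["guardian"],
    "guardian": ["nytimes"],
    "blog_a": ["nytimes", "guardian"],
    "blog_b": ["nytimes"],
}

def pagerank(links, damping=DAMPING, iterations=ITERATIONS):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform scores
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)  # split score across links
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

scores = pagerank(links)
# Pages that many others link to accumulate the highest scores,
# which is why heavily cited outlets rank well.
```

In this toy graph, "nytimes" receives links from all three other pages and therefore ends up with the highest score, mirroring the built-in advantage of established, frequently linked sources.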
In addition, traditional media such as newspapers or television stations employ editors and armies of fact checkers to ascertain the veracity of an article or a package. When that “human lens” is replaced by anonymous AI and algorithms, we lose context, and with it objectivity. (By the way, in my experience Arab News has one of the best editorial and fact-checking departments; for instance, no statistic is published without the writer submitting its source.)
Lastly we are dealing with foreign state and non-state actors trying to influence elections and the political process. In her testimony, Sandberg admitted that Facebook had ignored this problem for too long. Here again it is difficult for the tech companies to control the political ramifications of what is posted as long as algorithms and AI do the checking. They would need armies of human checkers of sources and facts to deal effectively with this issue. Facebook and the others are going down that route and it will have an impact on their bottom line, which Wall Street will not like.
The common denominator of the technology companies’ woes is, perversely, technology itself; how much are we as users willing to delegate to artificial intelligence and the algorithms underlying it? This is where tech companies and academia have to do a lot more to understand how these factors will influence our future. They also have to engage with regulators, politicians and the general public to help them become comfortable with this brave new world.
Prof. Jim Al-Khalili, incoming president of the British Science Association, declared recently that AI was bigger than all other issues facing humanity. While he may perhaps have erred on the side of hyperbole, it is certain that we need to understand better how AI, algorithms, blockchain, etc. will influence our lives. This goes beyond the scope of what tech companies can do into a broader debate reaching all corners of society.
- Cornelia Meyer is a business consultant, macro-economist and energy expert. Twitter: @MeyerResources