In The WorldPost this week, we examine and evaluate two key developments of the digital age: the emergent geopolitics of artificial intelligence and Facebook’s recent move toward “reputational scores” as a means to signal trustworthy information to users. In an interview, AI guru Kai-Fu Lee talks about his new book, “AI Superpowers: China, Silicon Valley and the New World Order.” For Lee, who is based in Beijing, the world of AI has become a “duopoly” in which China and America competitively drive each other’s innovations forward while dominating the rest of the world.
Lee argues that China’s “scrappy” days of stealing intellectual property to get ahead are behind it. Rather, its rapid advance in AI today is due to the superior business model of its tech entrepreneurs. China, he says, has “developed a valuable methodology of tech business innovation that involves an ultra-fast iterating product design based on instantaneous feedback from massive market data.” This is thanks to a hyper-competitive business culture aimed at commercially exploiting the data trail of the country’s billion-plus consumers who use smartphones for nearly all daily transactions.
“Weaving together data from mobile payments, public services, financial management and shared mobility gives Chinese companies a deep and more multi-dimensional picture of their users,” Lee observes. “That allows their AI algorithms to precisely tailor product offerings to each individual. In the current age of AI implementation, this will likely lead to a substantial acceleration and deepening of AI’s impact across China’s economy.” He sees a “data gap” emerging with the United States where cashless and credit card-less transactions are, so far, less pervasive.
Lee envisions the parallel universes of Chinese and American AI extending globally, with the United States dominating North America, Europe and Australia, and China eventually dominating Southeast Asia, Africa and, to some extent, South America. He also discusses the downside of AI, which he believes will displace 40 percent of jobs in both the developed and developing world, leaving only creative and human service jobs that can’t be automated — such as elderly care or teaching — as the primary source of future employment. Lee believes those jobs must be accorded greater social status commensurate with their humanitarian function and must be well-paid, which can be achieved through the sharing of AI-generated wealth.
As a colony in America’s tech empire, Europe is only waking up to the AI challenge with rearguard efforts to protect the privacy of personal data. But if Europe wants to get ahead of the game and recover the ability to chart its own course, it must put its own stamp on AI, as I argue in an essay with Nicolas Berggruen. “The most promising prospect for Europe would be to blaze a different path than the United States or China,” we write. “It could put its resources behind the proposal of Tim Berners-Lee, inventor of the World Wide Web, to ‘re-decentralize’ the Internet, both to assure a fairer allocation of the digital dividend and hand back control of personal data from big tech to individuals.”
We continue: “This culture-bound constraint on data collection, in turn, would reorient the development of AI in a more social instead of consumer marketing direction, which has been the main focus of both China and Silicon Valley. Europe could further choose to compete where it has an advantage in basic science. Just as Europe joined together to create the Large Hadron Collider, the world’s largest particle accelerator, European nations could cooperate on a project to be the first to achieve super-intelligent machines, ones that surpass human capacities.”
In a brief reflection, in some ways indicative of the distinct European sensibility, the French futurist polymath Jacques Attali considers the relationship between AI and art. It is no surprise, he writes from Paris, that AI “can create artwork that is capable of triggering emotions” because it draws on the “hidden laws” of mathematical architecture that is aesthetically pleasing, as seen in the perfect geometric figures found in nature. Great artist-engineers like Leonardo da Vinci understood this well, he notes.
In response to the growing fake news environment, including Russian election meddling through social media, Facebook has developed a system to rate the trustworthiness of its users. At a recent Berggruen Institute workshop in Bellagio, Italy, on how social media impacts democracy, some tech entrepreneurs expressed the view that this was the right approach but that Facebook could go further by assigning special scores to journalists with a record of trustworthy reporting.
Their idea is that a strong reputational score based on a multitude of news articles would attract more attention, just as a book on Amazon with 300 4.5-star ratings carries more weight than one with 10 three-star ratings. For journalists who are best in class, this should be a welcome and paradigm-shifting change, as it would create opportunities to build a brand name on the back of good work. That brand name could eventually eclipse traditional media and other platforms and could lead to innovations like new writers’ guilds of excellent reporting. Fake news will continue to proliferate, but its purveyors will at least not be legitimized with a high journalist rating and could appear with a clear warning of having previously published misleading information. At a minimum, such a system would reintroduce the idea of due care into journalism.
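The intuition behind the Amazon comparison — that 300 ratings averaging 4.5 stars deserve more trust than 10 ratings averaging three — can be made precise with a Bayesian average, one common way such scores are weighted. The sketch below is purely illustrative (neither Facebook nor Amazon has disclosed its formula): a score backed by little evidence is pulled toward a neutral prior, so volume earns credibility.

```python
def bayesian_average(ratings_sum: float, ratings_count: int,
                     prior_mean: float = 3.0, prior_weight: int = 20) -> float:
    """Blend the observed average with a neutral prior rating.

    The prior acts like `prior_weight` phantom ratings of `prior_mean`
    stars; its influence fades as real ratings accumulate.
    """
    return (prior_mean * prior_weight + ratings_sum) / (prior_weight + ratings_count)

# 300 ratings averaging 4.5 stars: the prior barely moves the score (~4.41)
popular = bayesian_average(4.5 * 300, 300)

# 10 ratings averaging 3 stars: too little evidence to escape the prior (3.0)
sparse = bayesian_average(3.0 * 10, 10)
```

The same logic would apply to a journalist’s reputational score: a long record of trustworthy articles would outweigh a short one, however well-rated the latter.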
Philosopher Onora O’Neill, however, is skeptical of using reputational scoring to identify untrustworthy information. “Reputational rankings only offer useful and reliable clues to others’ trustworthiness in a limited range of cases,” she writes from London, “but unfortunately these are special instances, and we cannot generalize from them. Reputational rankings can work well when consumers rank standardized products and services, such as manufactured goods or the services provided by hotel or restaurant chains. These rankings can provide reasonably accurate indicators of trustworthiness provided they meet two conditions. First, the rankings must reflect the experiences, and not merely the attitudes, of those who have actually used (or tried to use) the standardized product or service. And second, the rankings must come from a diverse and adequately representative range of users.”
She concludes: “There is no easy short cut for detecting fake, false or flaky content, for judging whose claims are factual or evidence-based, or for telling whose commitments can and can’t be trusted. Judging who is trustworthy in which matters requires a focus on facts and evidence. Appeals to reputations and attitudes are not an adequate substitute.”
Writing from Bangalore, India, anthropologist Nicole Rigillo proposes ways to bring “dark social” content into the light in the wake of violent mob killings sparked by misinformation spread via the Facebook-owned WhatsApp. The end-to-end encrypted messaging service is reportedly used by 75 percent of Indians. Rigillo proposes “a crowdsourced system managed by human moderators to monitor problematic content that users forward to them. This would provide the company access to messages without breaking encryption within the platform. WhatsApp could then cross-reference decrypted messages using its own metadata to monitor dark social sharing, flag false content or even purge such content entirely from the platform.”
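Rigillo’s proposed flow could be sketched roughly as follows. Everything here is hypothetical (the class, thresholds and verdicts are invented for illustration, not drawn from any WhatsApp system): users voluntarily forward suspect messages out of the encrypted channel to human moderators, whose verdicts are then cross-referenced against sharing metadata to flag or purge matching content platform-wide.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationQueue:
    # message fingerprint -> moderator verdict ("false" or "ok")
    verdicts: dict = field(default_factory=dict)

    def submit(self, message_text: str, verdict: str) -> None:
        """Record a human moderator's verdict on a user-forwarded message."""
        self.verdicts[hash(message_text)] = verdict

    def check(self, message_text: str, forward_count: int,
              viral_threshold: int = 1000) -> str:
        """Cross-reference a message's fingerprint with sharing metadata.

        Content already judged false is flagged, or purged once it
        spreads past the viral threshold; everything else passes.
        """
        verdict = self.verdicts.get(hash(message_text))
        if verdict == "false":
            return "purge" if forward_count > viral_threshold else "flag"
        return "allow"

queue = ModerationQueue()
queue.submit("kidnappers reported in the area", "false")
print(queue.check("kidnappers reported in the area", forward_count=5000))  # purge
```

The design choice worth noting is the one Rigillo emphasizes: moderators only ever see what users choose to forward to them, so encryption within the platform itself is never broken.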