Generative AI & ChatGPT: Risks to jobs, regulatory nightmare and disinformation overload
On February 6, Sundar Pichai, CEO of tech behemoth Google and its parent Alphabet, announced ‘Bard’, the company’s experimental AI service. Google’s response comes nearly three months after a hitherto lesser-known company, OpenAI, announced ‘ChatGPT’ to the world. The last three months have seen a flurry of activity as an excited world embraced the tool, making it compose prose and poetry and testing how far it could go.
Tech pundits even began writing articles arguing that ChatGPT would be a great addition, but that the company still had to figure out how to make money from it. Even before the ink could dry on those articles, Microsoft announced Microsoft Teams Premium, featuring services powered by OpenAI’s GPT-3.5, hoping to make its online meeting and collaboration tool much more intelligent.
Three Key Impacts
As with most new advances in technology, a debate is emerging on what AI can or can’t do. The impact that technology has on people, cultures, politics and nation-states is expected and well documented. But each new development in AI also brings forth worries about how relevant the human race will remain as functions and decision-making shift to machines.
For now, the arrival of conversational language models like ChatGPT (built on OpenAI’s GPT-3.5) and Bard (built on Google’s Language Model for Dialogue Applications, or LaMDA) will impact three major areas.
* First, the role that content plays in the evolution of societies and economies will probably be the first to face a challenge.
* Second, it will also prove to be a major challenge for regulators as they grapple with the after-effects of a technology that has already arrived.
* Third, as the technology gets weaponised, it will impact the ability to combat fake news and disinformation.
Replacing Content Creators
The coming of the internet saw the rise of a new language, culture, and multiple subcultures. It also saw economies rise, as the world wide web changed the way information was produced and consumed. ChatGPT and Bard have already begun to shift the paradigm. Once again, the language of the internet, as we understand it, is all set to change. An artificially intelligent tool, trained on vast swathes of the web, can now draw on myriad sources, develop correlations, and produce not just prose but poetry too.
This means economies that were built on generating content are now all set to be made redundant. Columnists are already wondering if ChatGPT can knock out a large chunk of the content-generating industry, including functions like technical writing or legal drafting. A tool with the right sources and keywords could prove much faster and more efficient than a pool of human minds.
A lot of the Information Technology Enabled Services (ITES) industry in emerging economies could suddenly become redundant, once again hurting emerging markets and shifting the geopolitical balance back to the advanced economies. Content creators on social media platforms that reward consistent uploads and participation in trends and memes must now contend with a technology that seems able to generate such content easily and without fatigue.
It also raises questions about how ChatGPT and its avatars source the information that generates their content. Unlike researchers who pore over reams of text and build a thesis over a period of time, ChatGPT can produce a credible-seeming piece of work in seconds. Besides raising questions about how the text was sourced and the conclusions substantiated, this will also change how information that now seems credible is consumed.
Nascent Regulatory Frameworks
Naturally, lawmakers have been worried about what AI can do. However, most recognise that it is futile to try and regulate technology itself; it is easier to regulate the outcomes of a technology once they are known. In the case of AI, most countries are at a nascent stage of their regulatory frameworks. Essentially, everyone is grappling with the unknown.
India’s own history shows that in little over a decade, the country went from advocating massive computerisation for economic reasons to nearly banning it for political expediency. Advancements in technology have continued at an increasing pace despite India’s policy (or lack thereof) on regulation. In the AI space so far, NITI Aayog has held a consultation on a National Strategy for Artificial Intelligence (NSAI) and released a two-part Approach Document titled “Responsible AI (RAI) #AIforall”, which proposes principles for responsible AI and approaches for operationalising them.
In the United States, the National Institute of Standards and Technology is building an AI risk management framework that tries to lay down elements of “responsible development”. It is an attempt to ensure that “core concepts in responsible AI emphasise human centricity, social responsibility and sustainability”. In sectors such as medicine, the US Food and Drug Administration (FDA) is also trying to draw up an action plan, in response to stakeholder feedback, on how to regulate AI/ML-based ‘software as a medical device’ (SaMD).
Similarly, the EU has signed an agreement with the US to not only jointly further AI research in five sectors, but also to create regulatory frameworks. The EU’s AI Act is already awaiting clearance from the European Parliament and is an attempt to restrict “unacceptable risk” as well as “high-risk” applications. The EU hopes that, like its GDPR, the AI Act, once it becomes law, could also set global standards.
Weaponising ChatGPT
But even as regulators race against time, tools like ChatGPT could decisively change the way the consumption of information is weaponised. This will have profound implications for societies already struggling under a deluge of disinformation.
Nearly 70 years ago, Marshall McLuhan, the philosopher who defined media theory, predicted that the speed at which information travels would have a profound impact on societies. By extension, the speed at which disinformation travels has already impacted not just domestic politics, but also geopolitics.
The use of disinformation by the Russians to influence the US elections is well known and documented. The fake news farms run by teens in Macedonia, widely reported on during the Trump years, demonstrate how a group of people without any state backing can exploit data voids on the internet by flooding it with made-up stories.
The exponential growth of social media has seen an equally exponential rise in disinformation and online gender-based violence, and has even fuelled genocide. Traditionally, the deliberation that went into the creation of content and its delivery struck a fine balance, ensuring the credibility of information as well as tempering its impact.
Social media, while “democratising” news, also ushered in fake news, devastating traditional media and its erstwhile gatekeepers. But it would still take information farms run by human beings to fuel disinformation targeting adversaries and citizens.
The growth of ChatGPT has now changed that paradigm: flesh and blood are no longer needed to produce fake news. Not only can technology now produce fake news that seems completely credible, it can do so in a matter of seconds or less. The weaponisation of any technology is inevitable; it is the management of the consequences that will now feel more daunting than ever before.
[An edited version of this article appeared in Moneycontrol and is available at: https://www.moneycontrol.com/news/opinion/artificial-intelligence-knocks-out-jobs-fuels-disinformation-difficult-to-regulate-10029741.html]