
Artificial Intelligence: 2024 And Beyond


Brew it slowly, with a good measure of safety and ethics, to ward off bitterness and bring out the best flavour, say experts and world leaders.

It is that time of the year again, when everyone is summarising the year gone by and speculating about the year ahead. Things are no different in the world of artificial intelligence (AI). Since the advent of ChatGPT, there is probably no topic that has been discussed and debated more than AI, so much so that Collins Dictionary declared AI its word of the year for 2023. The dictionary defines AI as “the modelling of human mental functions by computer programs.” That is how it has always been defined, but at one point that seemed far-fetched. Now it is real, and it is causing a lot of excitement and anxiety.

Bengaluru-based startup Karya employs rural Indians to source, annotate, and label AI-training data in local Indian languages (Source: karya.in)

The word of the year usually captures the dominant trend of its time. In 2020 it was lockdown, and the next year it was non-fungible tokens (NFTs). These terms no longer dominate our thoughts, which prompts us to wonder whether the excitement around AI will also fizzle out like past trends, or whether it will emerge brighter in the coming years. This brings to mind a recent remark by Vinod Khosla of Khosla Ventures, the firm that invested $50 million in OpenAI in early 2019. He observed that the flurry of investments in AI post ChatGPT may not meet with similar success. “Most investments in AI today, venture investments, will lose money,” he said in a media interview, comparing this year’s AI hype with last year’s cryptocurrency investment activity.


The gathering at Bletchley Park, UK

2023 began with everyone exploring the potential of generative AI, especially ChatGPT, like a newly acquired toy. Then people started using it for everything—from creating characters for ads and movies to writing code and even media articles. As generative AI systems are trained on large data repositories, which inadvertently contain outdated or opinionated content too, people have become increasingly aware of the problems in AI—from safety, security, misinformation, and privacy issues to bias and discrimination. No wonder the year seems to be ending on a more cautious note, with nations giving serious thought to the risks and the required regulations, not as isolated efforts but collaboratively. This is because, like the internet, AI is a technology without boundaries, and a combined effort is the only possible way to control the explosion.

Tech, thought, and political leaders from across the world met at the first global AI Safety Summit, hosted by the UK government in November. The agenda was to understand the risks involved in frontier AI, build effective guardrails to mitigate those risks, and use the technology constructively. The summit was well attended by political leaders from more than 25 countries, celebrated computer scientists like Yoshua Bengio, and technopreneurs like Sam Altman and Elon Musk.

Frontier AI is a trending term that refers to highly capable general-purpose AI models that match or exceed the capabilities of today’s most advanced models. The urgency to deal with the risks in AI stems not from the current scenario alone, but from the realisation that the next generation of AI systems could be exponentially more powerful. If the problems are not nipped in the bud, they are likely to blow up in our faces. So the summit was an attempt to expedite work on understanding and managing the risks in frontier AI, which include both misuse risks and loss-of-control risks.

In the run-up to the event, the UK’s Prime Minister Rishi Sunak highlighted that while AI can solve myriad problems ranging from health and drug discovery to energy management and food production, it also comes with real risks that need to be dealt with immediately. Based on reports by tech experts and the intelligence community, he pointed out several misuses of AI, ranging from terrorist activities, cyber-attacks, misinformation, and fraud, to the extremely unlikely but not impossible risk of ‘superintelligence,’ wherein humans lose control of AI.

The first of what promises to be a series of summits was characterised mainly by high-level discussions and by countries committing themselves to the task. Representatives from various countries, including the US, the UK, Japan, France, Germany, China, India, and the European Union, signed the Bletchley Declaration. They acknowledged that AI carries both short-term and longer-term risks, ranging from cybersecurity and misinformation to bias and privacy, and agreed that understanding and mitigating these risks requires international collaboration and cooperation at various levels.

The declaration also highlighted the responsibilities of developers. It read: “We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures.” Sunak is also said to have announced, at a high level, that makers of AI tools have agreed to give government agencies early access, to help them assess and ensure that the tools are safe for public use. At the time of drafting this story, there is no information on what level of access is being referred to here, whether it would be just a trial run or code-level access.

Regulations, research, and more

The UK government also launched the AI Safety Institute, to build the intellectual and computing capacity required to examine, evaluate, and test new types of AI, and to share the findings with other countries and key companies to ensure the safety of AI systems. The institute makes permanent, and builds on, the work of the Frontier AI Taskforce, which was set up by the UK government earlier this year. Researchers at the institute will have priority access to cutting-edge supercomputing infrastructure, such as the AI Research Resource, an expanding £300 million network comprising some of Europe’s largest supercomputers, as well as Bristol’s Isambard-AI and Cambridge-based Dawn, powerful supercomputers that the UK government has invested in.

On October 30th, US President Joe Biden signed an executive order that requires AI companies to share safety data, training information, and reports with the US government prior to publicly releasing large AI models or updated versions of such models. The order specifically alludes to models that contain tens of billions of parameters, trained on far-ranging data, which could pose a risk to national security, the economy, public health, or safety. The executive order emphasises eight policy goals for AI: safety and security; privacy protection; equity and civil rights; consumer protection; workforce protection and support; innovation and constructive competition; American leadership in AI; and responsible and effective use of AI by the Federal Government. The order also suggests that the US should attempt to identify, recruit, and retain AI talent, from amongst immigrants and non-immigrants, to build the required expertise and leadership. This has gained some attention on social media, as it bodes well for Indian tech professionals and STEM students in the US.

The standards, processes, and tests required to enforce this policy will be developed by government agencies using red-teaming, a methodology wherein ethical hackers work with the tech companies to pre-emptively identify and fix vulnerabilities. The US government also announced the launch of its own AI Safety Institute, under the aegis of the National Institute of Standards and Technology (NIST). During the summit, Sunak announced that the UK’s AI Safety Institute will collaborate with the AI Safety Institute of the US and with the government of Singapore, another notable AI stronghold.

At the end of October, the G7 released its International Guiding Principles on artificial intelligence and a voluntary Code of Conduct for AI developers. Part of the Hiroshima AI Process that began in May this year, these documents provide actionable guidance for governments and organisations involved in AI development.

In October, the United Nations Secretary-General António Guterres announced the creation of a new AI Advisory Body, to build a global scientific consensus on risks and challenges, strengthen international cooperation on AI governance, and enable nations to safely harness the transformative potential of AI.

India takes a balanced view of AI

At the AI Safety Summit, India’s Minister of State for Electronics and IT, Rajeev Chandrasekhar, argued that AI should not be demonised to the extent that it is regulated out of existence; he described it as a kinetic enabler of India’s digital economy and a big opportunity for the country. At the same time, he acknowledged that proper regulations must be in place to avoid misuse of the technology. He opined that in the past decade, countries across the world, including India, inadvertently let regulations fall behind innovation, and are now having to contend with the menace of toxicity and misinformation across social media platforms. As AI has the potential to amplify toxicity and weaponisation to the next level, he said that countries should work together to stay ahead of, or at least at par with, innovation when it comes to regulating AI.

“The broad areas, which we need to deliberate upon, are workforce disruption by AI, its impact on privacy of individuals, weaponisation and criminalisation of AI, and what must be done to have a global, coordinated action against bad actors, who may create unsafe and untrusted models, that may be available on the dark web and can be misused,” he said to the media.

Speaking to the media after the summit, he said that these issues would be carried forward and discussed at the Global Partnership on AI (GPAI) Summit that India is chairing in December 2023. He also said that India would strive to create an early regulatory framework for AI within the next five or six months. Pointing out that innovation is happening at hyper speed, he stressed that countries must address the issue urgently, without spending two or three years in intellectual debate.

AI – To be or not to be

Outside Bletchley Park, a group of protestors under the banner of ‘Pause AI’ was seeking a temporary pause on the training of AI systems more powerful than OpenAI’s GPT-4. Speaking to the press, Mustafa Suleyman, a co-founder of DeepMind and now the CEO of startup Inflection AI, said that while he disagreed with those seeking a pause on next-generation AI systems, the industry may have to consider that course of action sometime soon. “I do not think there is any evidence today that frontier models of the size of GPT-4 present any significant catastrophic harms, let alone any existential harms. It is objectively clear that there is incredible value to people in the world. But it is a very sensible question to ask, as we create models which are 10 times larger, 100 times larger, 1000 times larger, which is going to happen over the next three or four years,” he said.

Industry attendees had also remarked on social media about the evergreen debate between open source and closed-source approaches to AI research. While some felt that it was too risky to freely distribute the source code of powerful AI models, the open source community argued that open sourcing the models would speed up and intensify safety research, rather than leaving the code confined to profit-driven companies.

Union Minister Rajeev Chandrasekhar at the AI Safety Summit held in the UK in November 2023 (Source: Press Information Bureau)

It is interesting to note that the event happened at Bletchley Park, a stately mansion near London, which was once the secret home of the ‘code-breakers,’ including Alan Turing, who helped the Allied Forces defeat the Nazis during the Second World War by cracking the German Enigma code. Symbolically, it is hoped that the summit will result in a strong collaboration between nations aiming to build effective guardrails for the proper use of AI. However, some cynics remind us that the code-breakers’ team later evolved into the UK’s most powerful intelligence agency, which, in cahoots with the US, spied on the rest of the world!

What is happening at OpenAI: The Sam Altman Files
Even as this issue is about to go to press, news is breaking about Sam Altman, CEO of OpenAI. On November 17th, OpenAI announced that Altman would step down as CEO and leave the board, and that CTO Mira Murati would take over as interim CEO. The official statement alleged that Altman was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” and that, “the board no longer has confidence in his ability to continue leading OpenAI.”

Speculation is rife that there have been several disagreements within the board and amongst senior employees of OpenAI over the safe and responsible development of AI tech, and over whether the business motives of the company were clashing with its non-profit ideals. Readers might recall that this is not the first time the OpenAI board has had a fallout over safety-related concerns.

Unhappy with the sacking of Altman, co-founder Greg Brockman and three senior scientists resigned. A majority of OpenAI’s employees also protested against the board’s move. When Murati too reacted in favour of Altman, the OpenAI board replaced her with Emmett Shear, former CEO of Twitch, as interim CEO. Soon thereafter, Microsoft announced that Altman and Brockman would be joining Microsoft to lead a new advanced AI research team. It looked like the entire company was pitted against the board. On November 22nd, five days after the original statement, it came to be known that Altman would be reinstated as CEO of OpenAI and would work under the supervision of a newly constituted board.

The soup sure is boiling, and we will be ready to serve you more news on this in the subsequent issues.

Regulations are rife, yet innovation thrives

The idea behind these regulatory efforts is not to dampen the growth of AI—because everyone realises that AI can play a very constructive role in this world. As a simple example, take AI4Bharat, a government-backed initiative at IIT Madras, which develops open source datasets, tools, models, and applications for Indian languages. Microsoft’s Jugalbandi is a generative AI chatbot for government assistance, powered by AI4Bharat. Local users can ask the chatbot a question in their own language—either by voice or text—and get a response in the same language. The chatbot retrieves relevant content, usually available in English, and translates it into the local language for the user. The National Payments Corporation of India (NPCI) is working with AI4Bharat to facilitate voice-based merchant payments and peer-to-peer transactions in local Indian languages. This one example is enough to show the role of AI in bridging the digital divide, and there are many more.
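For readers who want a more concrete picture, here is a minimal sketch, in Python, of the retrieve-and-translate pattern described above. The function names and the stubbed translation and retrieval steps are illustrative assumptions, not Jugalbandi’s actual code or APIs.

```python
# A minimal sketch of the retrieve-and-translate pattern described above.
# All names and stubbed steps are illustrative placeholders, not the
# actual Jugalbandi implementation.

def translate(text: str, source_lang: str, target_lang: str) -> str:
    """Placeholder for a machine-translation step (e.g., an Indic MT model)."""
    return text  # identity stub so the example runs end to end


def retrieve_answer(query_en: str) -> str:
    """Placeholder for retrieval over a (mostly English) knowledge base."""
    return "You can apply for the scheme at your nearest Common Service Centre."


def answer_in_local_language(user_query: str, user_lang: str) -> str:
    # 1. Translate the user's question (a voice query would first pass
    #    through speech-to-text) from the local language into English.
    query_en = translate(user_query, source_lang=user_lang, target_lang="en")
    # 2. Retrieve the relevant content, which is usually available in English.
    answer_en = retrieve_answer(query_en)
    # 3. Translate the answer back into the user's language before replying
    #    (optionally followed by text-to-speech for a voice response).
    return translate(answer_en, source_lang="en", target_lang=user_lang)


if __name__ == "__main__":
    print(answer_in_local_language("How do I apply for the scheme?", user_lang="hi"))
```

In a production system each placeholder would be a call to a speech, translation, or retrieval model, but the overall structure remains this simple three-step chain.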

Karya, a Bengaluru-based startup founded by Stanford alumnus Manu Chopra, focuses on sourcing, annotating, and labelling non-English data with high accuracy. The startup, founded in 2021 before the ChatGPT buzz, promises its clients high-quality local-language content, eliminating bias, discrimination, and misinformation at the data level. AI services trained only on English content often tend to have an improper view of other cultures. In a media story, Stanford University professor Mehran Sahami explained that it is critical to have a broad representation of training data, including non-English data, so that AI systems do not perpetuate harmful stereotypes, produce hate speech, or yield misinformation. Karya attempts to bridge this gap by collecting content in a wide range of Indian languages. It achieves this by employing workers, especially women, from rural areas. Its app allows workers to enter content even without Internet access and provides voice support for those with limited literacy. Supported by grants, Karya pays the workers nearly 20 times the prevailing market rate, to ensure they maintain a high quality of work. According to a news report, over 32,000 crowdsourced workers in India have logged into the app, completing 40 million digital tasks, including image recognition, contour alignment, video annotation, and speech annotation. Karya is now a sought-after partner for tech giants like Microsoft and Google, which aim to ultra-localise AI.
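To make that workflow concrete, here is a small sketch of how one such crowdsourced task record might be structured so it can be completed offline and synced later. The field names and methods are assumptions made for illustration; this is not Karya’s actual data schema or app code.

```python
# An illustrative sketch of a crowdsourced speech-annotation task that can be
# completed offline and uploaded later. Field names and methods are assumed
# for illustration; this is not Karya's actual schema or app code.

from dataclasses import dataclass, field, asdict
from typing import Optional
import json


@dataclass
class SpeechAnnotationTask:
    task_id: str
    language: str                      # e.g., "kn" for Kannada
    prompt_text: str                   # sentence the worker reads aloud
    audio_path: Optional[str] = None   # recording stored locally on the device
    status: str = "pending"            # pending -> recorded -> synced
    metadata: dict = field(default_factory=dict)

    def mark_recorded(self, audio_path: str) -> None:
        """Attach the locally saved recording; no connectivity is needed here."""
        self.audio_path = audio_path
        self.status = "recorded"

    def to_sync_payload(self) -> str:
        """Serialise the task for upload once the device is back online."""
        return json.dumps(asdict(self), ensure_ascii=False)


if __name__ == "__main__":
    task = SpeechAnnotationTask(task_id="t-0001", language="kn",
                                prompt_text="A sentence to be read aloud in Kannada")
    task.mark_recorded("/local/recordings/t-0001.wav")
    print(task.to_sync_payload())
```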

On the tech front, people are betting on quantum computing to give AI an unprecedented thrust. With that kind of computing power, AI could help us understand several natural phenomena and find ways to tackle problems ranging from poverty to global warming.

And then there is Grok, the ‘truth-seeking’ AI model from Elon Musk’s xAI. Released to a select audience in November this year, it is touted as serious competition for OpenAI’s ChatGPT, Google’s Bard, and Anthropic’s Claude. In another interesting marketing spin, we see AI being positioned as a co-worker or collaborator, assuaging the job-stealer image it has acquired. The recently released Microsoft Copilot hopes to be your ‘everyday AI companion,’ taking mundane tasks off users’ minds, reducing their stress, and helping them collaborate and work better. Microsoft thinks Copilot subscriptions could rake in more than $10 billion per year by 2026.

From online retail, quick-service restaurants, and social media platforms to financial institutions, innumerable organisations seem to be introducing AI-driven features in their products and platforms. In a media report, Shopify’s Chief Financial Officer Jeff Hoffmeister remarked that the company’s AI tools are like a ‘superpower’ for sellers. Google has also been talking about its latest AI features helping small businesses and merchants create an impact this holiday season. Google’s AI-powered Product Studio lets merchants and advertisers create new product imagery for free, simply by typing in a prompt describing the image they want. Airbnb also seems to be betting big on AI. If rumours are to be believed, Instagram is working on a trailblazing feature that lets users create personalised AI chatbots that can engage in conversations, answer questions, and offer support.

On the usage front, people continue to find interesting uses for AI, even as many companies have barred their employees from using it to write code and other content. A South Indian movie maker, for example, used AI to create a younger version of the lead actor for the flashback scenes.

The more AI is used, the more we hear of lawsuits being filed against AI companies—concerning misinformation, defamation, intellectual property rights, and more. Recently, Scarlett Johansson (Black Widow in the Avengers movies) filed a case against Lisa AI for using her face and voice in an AI-generated advertisement without her permission. Tom Hanks also alerted his fans to a video promoting a dental plan that used an AI version of him without his permission. According to a report in The Guardian, comedian Sarah Silverman has also sued OpenAI and Meta for copyright infringement.

The job dilemma

Elon Musk famously remarked to Sunak during the Bletchley Summit that AI has the potential to take away all jobs! “You can have a job if you want a job… but AI will be able to do everything. It’s hard to say exactly what that moment is, but there will come a point where no job is needed,” he said. A 2023 report by Goldman Sachs also says that two-thirds of occupations could be partially automated by AI. The Future of Jobs 2023 report by the World Economic Forum states that, “Artificial intelligence, a key driver of potential algorithmic displacement, is expected to be adopted by nearly 75% of surveyed companies and is expected to lead to high churn—with 50% of organisations expecting it to create job growth and 25% expecting it to create job losses.”

AI is sure to shake up jobs as they exist today, but it is also likely to create new opportunities. Recent research by Pearson for ServiceNow revealed that AI and automation will require 16.2 million workers in India to reskill and upskill, while also creating 4.7 million new tech jobs. According to the report, technology will transform the tasks that make up each job, but it presents an unprecedented chance for Indian workers to reshape and future-proof their careers. With NASSCOM predicting that AI and automation could add up to $500 billion to India’s GDP by 2025, it would be wise for people to skill up to work ‘with’ AI in the coming year. AI’s insatiable thirst for data is also creating more job opportunities, not just for the tech workforce but also for the unskilled rural population, as Karya has proven. NASSCOM predicts that India alone will have nearly one million data annotation workers by 2030!

It is clear from happenings around the world that no country intends to strike down AI. Of course, the risks are real too, which makes regulations essential—and it does seem to be raining regulations this monsoon. Indeed, ethical and safe use of AI is likely to be the dominant theme of 2024, but rather than killing AI, this focus will eventually strengthen the ecosystem, leading to controlled and responsible growth and adoption.


Janani G. Vikram is a freelance writer based in Chennai, who loves to write on emerging technologies and Indian culture. She believes in relishing every moment of life, as happy memories are the best savings for the future.

