Making A Difference With Generative AI

Everyone seems to be using ChatGPT for something or other. Here, our experts suggest some serious, game-changing applications the technology can be put to, and the pitfalls to avoid

Social media is abuzz with people’s experiments with ChatGPT. Amongst other things, it seems to be capable of debugging and writing code (even small apps), drafting essays, poetry and emails, having a meaningful conversation with users, planning your vacation, telling you what to pack for a business trip, preparing your shopping list, extracting tasks from a conversation or meeting minutes, summarising a long text into a brief overview, writing a new episode of Star Wars, and much more!

And that is not all. Developers can also utilise the power of OpenAI’s AI models to build interactive chatbots and advanced virtual assistants through the application programming interface (API). They can use the GPT-3 API, or join the waitlist for the GPT-4 API.
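To give a sense of what building on the API involves, here is a minimal chatbot sketch in Python. It assumes the official openai package (the v1.x client style), a model name such as gpt-3.5-turbo, and an OPENAI_API_KEY environment variable; adjust these for whatever access and library version you actually have.

```python
# Minimal conversational assistant built on the chat completions API.
# Assumptions: the `openai` Python package (v1.x client) is installed,
# OPENAI_API_KEY is set, and you have access to the chosen model.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def chat() -> None:
    # Keep the running conversation so the model has context for follow-ups.
    history = [{"role": "system",
                "content": "You are a helpful assistant for planning trips."}]
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_input})
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # or a GPT-4 model if you have access
            messages=history,
        )
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print("Assistant:", reply)

if __name__ == "__main__":
    chat()
```

The same loop, with a different system prompt, is the skeleton behind most of the “virtual assistant” applications mentioned above.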

Many companies are also using OpenAI’s generative AI models to enhance their own applications and platforms. OpenAI offers multiple models with different capabilities and price points; pricing is per 1,000 tokens, so customers pay only for what they use.
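As a rough illustration of how per-1,000-token pricing works, the sketch below counts the tokens in a prompt with the tiktoken tokeniser and multiplies by a per-1,000-token rate. The rate shown is a placeholder, not an actual OpenAI price, and the model name is an assumption; check the current price list before budgeting anything.

```python
# Back-of-the-envelope cost estimate under per-1,000-token pricing.
# Assumes the `tiktoken` package; PRICE_PER_1K_TOKENS is an illustrative
# placeholder, not a real OpenAI price.
import tiktoken

PRICE_PER_1K_TOKENS = 0.002  # placeholder figure in USD

def estimate_cost(text: str, model: str = "gpt-3.5-turbo") -> float:
    encoding = tiktoken.encoding_for_model(model)
    n_tokens = len(encoding.encode(text))
    return n_tokens / 1000 * PRICE_PER_1K_TOKENS

if __name__ == "__main__":
    prompt = "Summarise the attached meeting notes into five bullet points."
    print(f"Estimated prompt cost: ${estimate_cost(prompt):.6f}")
```

Note that the completion the model sends back is billed in tokens too, so real costs depend on how long the responses are.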

Duolingo uses GPT-4 to deepen conversations, Be My Eyes uses it to enhance visual accessibility, and Stripe uses it to combat fraud. Morgan Stanley is using GPT-4 to organise its knowledge base, while the government of Iceland is using it to preserve its language.

It is also believed that tools like ChatGPT and DALL-E (which generates images from textual prompts) will help advance the metaverse, as they enable people with no art or design background to design spaces, engage in meaningful conversations in the virtual world, and more.

“Generative AI is already being used for many art and creative domains, such as Firefly from Adobe and Picasso from Nvidia. Similarly, there are also language-specific applications around generative AI, such as composing emails, creating a summary of documents, and detecting to-dos from call transcripts. The generative AI techniques used for images and text could also be used for other kinds of data, such as chemical compound data or application log data,” says Sachindra Joshi, IBM Distinguished Engineer, Conversational Platforms, IBM Research India.
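One of the language-specific uses Joshi mentions, detecting to-dos in a call transcript, can be prototyped with a single prompt to a chat model. The sketch below is an illustration only: the prompt wording, model name and openai client usage are assumptions made for this example, not IBM’s implementation.

```python
# Sketch: extract action items (to-dos) from a call transcript with one
# prompt. Assumes the `openai` package and OPENAI_API_KEY; the prompt and
# model are illustrative, not any vendor's production implementation.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

TRANSCRIPT = """
Priya: I'll send the revised board layout by Friday.
Arun: Fine. Can someone also book the EMC test slot for next month?
Priya: I'll ask Meena to handle the booking and confirm by Tuesday.
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": ("Extract the action items from the transcript. "
                     "Return one line per item as: owner - task - due date (if any).")},
        {"role": "user", "content": TRANSCRIPT},
    ],
)
print(response.choices[0].message.content)
```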

He cites the example of molecular synthesis. By capturing the language of molecules in a foundation model and using it to “generate” new ideas for drugs and other chemicals of interest, IBM Research created a large-scale and efficient molecular language model transformer trained on over a billion molecular text strings. This model performs better than all state-of-the-art techniques on molecular property prediction and captures short-range and long-range spatial relationships through learned attention. IBM has also partnered with NASA to build a domain-specific foundation model, trained on earth science literature, to help scientists utilise up-to-date mission data and derive insights easily from a vast corpus of research that would otherwise be challenging for anyone to read and internalise thoroughly.

Cybersecurity threats posed by ChatGPT
Steve Grobman, CTO at McAfee, explains some of the cybersecurity-related concerns to us. “When it comes to ChatGPT, one of the main considerations for risk is that the bot is lowering the bar on who can create malicious threats, and improving the efficiency of tasks that traditionally require a human. For example, well-crafted, unique phishing messages can be created at scale, and a wide range of malware implementations can be built by even relatively unskilled individuals. ChatGPT has attempted to prevent malicious use cases. However, there are already internet posts on how to circumvent these restrictions.”

“This includes using ChatGPT to build components that are benign on their own but can be stitched together to create malware,” he says. “Any new method to defend against attacks needs the ability to understand how the attacks will be created. ChatGPT helps with this, as researchers can test the boundaries of what attacks ChatGPT can create. What is less clear is how directly ChatGPT can auto-generate elements of the defence. While there may be some efficiencies and unique insights that ChatGPT provides, many other tools, techniques and technologies will be required to defend against ChatGPT-curated attacks.”

Within the threat landscape specifically, Grobman says we might see ChatGPT enhancing the effectiveness and efficiency of cyberattacks in the future. For example, it enables spear phishing to operate at the scale of traditional bulk phishing. Attackers can now use ChatGPT to craft automated messages in bulk that are well-written and targeted to individual victims, making them more likely to succeed. “Today’s state-of-the-art AI-authored content is challenging to differentiate from human-authored content. For example, McAfee recently conducted a survey where two-thirds of the 5,000 respondents could not differentiate between machine-authored and human-authored content,” he says.

He explains that ChatGPT also lowers the barrier to entry, making technology that traditionally required highly skilled individuals and substantial funding available to anyone with access to the internet. This means that less skilled attackers now have the means to generate malicious code in bulk. For example, they can ask the program to write code that generates text messages to hundreds of individuals, much like a non-criminal marketing team might. However, instead of taking the recipient to a safe site, the message directs them to a site with a malicious payload. The code in and of itself is not malicious, but it can be used to deliver dangerous content.

He signs off, saying, “As with any new or emerging technology or application, there are pros and cons. ChatGPT will be leveraged by both good and bad actors, and the cybersecurity community must remain vigilant about the ways these can be exploited.”

Sharing how IBM is building on these advances, Joshi says, “At IBM, we are also applying these advancements to automate and simplify the language of computing, i.e., code. Project CodeNet, our massive dataset encompassing many of the most popular coding languages from past and present, can be leveraged into a model that would be foundational to automating and modernising countless business applications. In October 2022, IBM Research and Red Hat released Project Wisdom, an effort designed to make it easier for anyone to write Ansible Playbooks with AI-generated recommendations: think pair programming with an AI in the ‘navigator’ seat. Fuelled by foundation models born from IBM’s AI for Code efforts, Project Wisdom has the potential to dramatically boost developer productivity, extending the power of AI assistance to new domains.”
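Project Wisdom itself is an IBM and Red Hat effort, but the general idea of drafting an Ansible task from a plain-English request can be sketched with any chat-capable model. The snippet below is a hypothetical illustration using the openai package; it is not Project Wisdom, and the prompt, model name and environment variable are assumptions.

```python
# Hypothetical sketch: ask a chat model to draft an Ansible task from a
# plain-English request. This is NOT Project Wisdom; it only illustrates
# the "AI pair programmer" idea.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

request = "Install nginx on Ubuntu hosts and make sure the service is running."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": ("You write minimal, valid Ansible tasks in YAML. "
                     "Return only the YAML, with no commentary.")},
        {"role": "user", "content": request},
    ],
)
print(response.choices[0].message.content)  # review before adding to a playbook
```

As with any generated code, the output needs a human review before it goes into a real playbook, which is exactly the “navigator” framing the quote uses.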
