The rapid and widespread adoption of artificial intelligence (AI) has ushered in a new era of technological advancement, revolutionizing various industries and becoming immensely popular worldwide. AI-driven applications and solutions have streamlined processes, improved efficiency, and enhanced the overall user experience. However, this surge in AI’s popularity also comes with a dark side. Malicious actors have swiftly capitalized on this powerful technology to their own advantage, opening up avenues for accelerated cybercrime with the new generative AI tool called WormGPT. Its creator considers it as the "biggest enemy of the well-known ChatGPT."
When ChatGPT launched its plugins in March it came with a disclaimer that people should trust a plugin before they use it. However, recent research has recorded methods through which ChatGPT plugins focusing on the use of OAuth, a web standard that allows you to share data across online accounts, can be used for unauthorized retrieval of chat history, acquisition of personal information, and enable remote execution of code on an individual’s device. Unfortunately, similar risks also apply to any Large Language Models (LLMs) or generative AI systems connected to the web.
Adding to the severity of the situation, malicious actors are actively promoting “jailbreaks” for ChatGPT, creating custom prompts and inputs with the intention of manipulating the WormGPT tool to produce output that may lead to the disclosure of sensitive information, generate inappropriate content, or execute harmful code.
Wired speaks about being cautious of the plugins and has the spokesperson from OpenAI, Niko Felix stating "the company is working to improve ChatGPT against "exploits" that can lead to its system being abused." With WormGPT surfacing on underground forums and being described as the ultimate means for threat actors to carry out advanced phishing and business email compromise (BEC) attacks, the challenge of mitigating AI-enabled cyber threats becomes increasingly complex.
AI-enabled cyber threats pose a real challenge to ensuring the security and integrity of systems. Among the crucial measures to safeguard against these threats, server hardening is a vital practice. Server hardening involves configuring servers and their operating systems to minimize potential vulnerabilities, reduce attack surfaces, and strengthen overall defense mechanisms.
By implementing strict security controls and removing unnecessary software or services, server hardening helps strengthen the infrastructure against unauthorized access, data breaches, and potential exploitation of AI systems. With AI technologies becoming more integral to critical functions, the resilience of these systems through proper server hardening becomes paramount in protecting sensitive data, intellectual property, and maintaining the trust of users and stakeholders.