The last decade has witnessed rapid adoption of machine learning (ML) and artificial intelligence (AI) technologies across various sectors. More recently, the introduction of generative AI, exemplified by platforms like ChatGPT, has propelled AI into the public spotlight, sparking a race for innovation. This article focuses on the dual effects of AI on cybercrime and its implications for defense.
AI tools have significantly impacted cybercrime by diminishing the need for human involvement in aspects like malware development, scams, and extortion within cybercriminal organizations. This reduces recruitment demands and lowers operational costs. Although crime-related job postings usually appear on hidden darknet forums and channels that offer anonymity, the practice still carries risk, potentially exposing criminals to informants and law enforcement.
In addition, AI gives cybercriminals the means to analyze large datasets, identify vulnerabilities and high-value targets, and launch more precise attacks with greater financial payoff.
Another area that can flourish with AI is the development of sophisticated phishing and social engineering attacks. This includes the creation of realistic deepfakes, deceptive websites, fraudulent social media profiles, and AI-powered scam bots. For instance, in 2019, an AI-driven voice-cloning attack impersonated an executive's voice, tricking the CEO of a UK energy company into transferring approximately $243,000 to fraudsters.
AI is also expected to be widely used by state-sponsored actors and criminal groups for disinformation campaigns. This involves creating and spreading deceptive content, including deepfakes and voice clones, and deploying disinformation bots. There is already evidence of cybercriminals using AI to manipulate social media during the COVID-19 pandemic.
AI’s role also extends to streamlining the development of adaptable, sophisticated malware. AI-powered malware can employ advanced “self-metamorphic” mechanisms to avoid detection, and criminals could also exploit AI to build AI-powered malware development kits. DeepLocker, an IBM Research proof of concept, exemplifies this class: it conceals its payload within a benign application and unlocks it only when it identifies its intended victim, remaining inert everywhere else to evade detection.
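The targeting mechanism DeepLocker demonstrated is often described as "environmental keying": the payload stays encrypted, and the key is derived from an attribute of the intended victim, so analysts inspecting the sample cannot recover the payload or even learn who the target is. A minimal conceptual sketch (all names and attributes here are illustrative, and no payload is involved):

```python
import hashlib

# Conceptual sketch of environmental keying: the sample carries only a
# hash of the unlock key, never the key itself, so static analysis
# reveals neither the payload nor the intended target.

def derive_key(target_attribute: bytes) -> bytes:
    """Derive a candidate key from an observed environment attribute."""
    return hashlib.sha256(target_attribute).digest()

def try_unlock(observed: bytes, key_check: bytes) -> bool:
    """Unlock only when the observed environment matches the target."""
    candidate = derive_key(observed)
    return hashlib.sha256(candidate).digest() == key_check

# The author embeds a check derived from the victim's attribute...
key_check = hashlib.sha256(derive_key(b"victim-hostname")).digest()

# ...and at run time the sample stays dormant on any other machine.
print(try_unlock(b"analyst-sandbox", key_check))   # False
print(try_unlock(b"victim-hostname", key_check))   # True
```

This is why such samples defeat sandbox analysis: on any machine other than the target, there is simply nothing malicious to observe.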
AI’s application for security will prominently be seen in threat detection and prevention, enhancing the accuracy and effectiveness of security tools. Conventional tools relying on signatures and user input can struggle to detect sophisticated attacks, so an increasing number of vendors are turning to ML technologies for effective threat detection. These technologies enable tools to analyze large datasets to identify indicators of compromise, speed up investigations, and reveal hidden patterns. Prominent examples include Cisco Secure Endpoint and Cisco Umbrella, which use ML to detect suspicious behavior.
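The core idea behind behavior-based detection can be illustrated with a deliberately simple statistical baseline (real products use far richer models; the hosts and numbers below are invented):

```python
from statistics import mean, stdev

# Minimal sketch of behavior-based detection: learn a baseline of
# normal per-host activity, then flag hosts that deviate strongly.

def fit_baseline(history: list) -> tuple:
    """Learn a simple statistical baseline from past activity."""
    return mean(history), stdev(history)

def flag_anomalies(baseline: tuple, observations: dict,
                   threshold: float = 3.0) -> list:
    """Return hosts whose activity z-score exceeds the threshold."""
    mu, sigma = baseline
    return [host for host, value in observations.items()
            if abs(value - mu) / sigma > threshold]

# Baseline: typical outbound connections per hour across the fleet.
baseline = fit_baseline([40, 52, 47, 45, 50, 43, 48, 51, 46, 49])

# New observations: one host suddenly beaconing far above normal.
print(flag_anomalies(baseline, {"web-01": 47, "db-02": 51, "hr-03": 400}))
# → ['hr-03']
```

A signature-based tool would miss this traffic entirely if the malware were novel; the statistical model flags it because the behavior, not the binary, is abnormal.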
Another use for AI by defenders and law enforcement is the attribution of criminal activity to adversaries, even those that deliberately plant false flags to mislead attribution, through the analysis of multiple data points, including attack signatures, malware characteristics, and historical attack patterns. By examining these datasets, AI can identify patterns that help experts narrow down the potential origin of an attack. Attribution is valuable because it provides insights into the motives and capabilities of the attackers.
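One simple way to operationalize this is similarity scoring: compare the tradecraft observed in a new incident against profiles of known groups. A toy sketch using Jaccard similarity (the group names and technique labels are made up for illustration):

```python
# Toy similarity-based attribution: rank known actor profiles by how
# much their recorded tradecraft overlaps with a new incident.

def jaccard(a: set, b: set) -> float:
    """Overlap between two technique sets, from 0.0 to 1.0."""
    return len(a & b) / len(a | b)

KNOWN_ACTORS = {
    "GroupA": {"spearphishing", "dll-sideloading", "cobalt-strike"},
    "GroupB": {"supply-chain", "webshell", "credential-dumping"},
}

def rank_actors(observed: set, profiles: dict) -> list:
    """Rank known actors by overlap with the observed tradecraft."""
    return sorted(profiles, key=lambda g: jaccard(observed, profiles[g]),
                  reverse=True)

incident = {"spearphishing", "cobalt-strike", "credential-dumping"}
print(rank_actors(incident, KNOWN_ACTORS))  # GroupA ranks first
```

In practice the data points would also include malware code reuse, infrastructure overlaps, and timing patterns, which makes false-flag tactics harder to sustain across every dimension at once.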
ML and AI are set to see expanding use in automated analysis and identification of threats. Through automated analysis of data from multiple sources, such as threat intelligence feeds, dark web monitoring, and open-source intelligence, emerging threats can be identified and mitigated effectively. AI can also serve as a valuable tool for predictive analytics, enabling the anticipation of potential cyber threats and vulnerabilities based on historical data and patterns.
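A basic form of such cross-source analysis is corroboration: an indicator reported independently by several feeds is more likely to reflect a real emerging threat than one seen in a single place. A minimal sketch (the feed names and indicators are invented):

```python
from collections import Counter

# Sketch of cross-feed correlation: surface indicators of compromise
# (IOCs) that appear in multiple independent intelligence sources.

def emerging_indicators(feeds: dict, min_sources: int = 2) -> set:
    """Return indicators seen in at least `min_sources` distinct feeds."""
    counts = Counter(ioc for feed in feeds.values() for ioc in feed)
    return {ioc for ioc, n in counts.items() if n >= min_sources}

feeds = {
    "commercial-intel": {"198.51.100.7", "evil.example"},
    "darkweb-monitor":  {"evil.example", "203.0.113.9"},
    "osint":            {"evil.example", "198.51.100.7"},
}
print(emerging_indicators(feeds))
# prints a set containing 'evil.example' and '198.51.100.7'
```

The same aggregated history then feeds predictive analytics: trends in which indicators, sectors, or techniques are gaining corroboration hint at where the next wave of attacks may land.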
Finally, AI can serve as a valuable contributor to cybersecurity training. It can offer students personalized learning paths based on their strengths and weaknesses, adapting exercises, simulated training environments, and material based on their performance and other metrics.
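The adaptive mechanism behind such personalized paths can be sketched very simply: track a mastery score per topic, steer the learner toward their weakest area, and update scores from performance. The topics, scores, and update rule below are hypothetical:

```python
# Minimal sketch of adaptive learning-path selection: pick the next
# exercise from the learner's weakest topic and update mastery scores
# from their results. All topics and parameters are illustrative.

def next_topic(scores: dict) -> str:
    """Pick the topic with the lowest mastery score."""
    return min(scores, key=scores.get)

def update(scores: dict, topic: str, correct: bool,
           rate: float = 0.3) -> dict:
    """Nudge mastery toward 1.0 on success, toward 0.0 on failure."""
    target = 1.0 if correct else 0.0
    scores[topic] += rate * (target - scores[topic])
    return scores

student = {"phishing": 0.9, "crypto": 0.4, "forensics": 0.7}
print(next_topic(student))           # → 'crypto'
update(student, "crypto", correct=True)
print(round(student["crypto"], 2))   # → 0.58
```

A real training platform would layer richer signals on top (time on task, hint usage, simulated-environment telemetry), but the feedback loop is the same: measure, adapt, repeat.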