In celebration of Cybersecurity Awareness Month, Mount Saint Mary College’s Center for Cybersecurity recently hosted an insightful public talk, “Generative AI: Opportunities, Challenges, and Security Considerations,” by Tianyu Wang, assistant professor of Computer Science at Mercy University.
Wang, whose background includes roles as a Data Scientist at IBM and a Data Security Analyst at Mount Sinai Health System, shared his expertise on the explosive growth of AI and its far-reaching implications for the field of cybersecurity.
While AI tools are rapidly being integrated into many facets of modern life, comparatively little is being done to ensure these systems can withstand AI-based cyberattacks, Wang noted.
“So 90 percent of the organizations are trying to implement AI tools in their pipeline, image management, etc.,” Wang said. “But only 22 percent of organizations have the AI strategy to defend their system. Did you see the gap? This is why we need people in cybersecurity to not only know how to implement AI tools, but also know how to protect your system to fight against an AI attack.”
Not all AI is created equal, Wang explained. Traditional AI (also called “machine learning”) is used for prediction tasks, like spam detection. Newer generative AI can create content such as text, images, and video. While many use these tools to enhance their creative pursuits, generative AI is also being leveraged by the unscrupulous to produce highly convincing malicious content.
“What if they can generate a perfect phishing email?” Wang posited. “There’s no grammar problems, no mistakes, no issues. Everything looks real… [and] what if they also generate some malicious code?”
But it’s no longer a “what if” scenario: Attacks like these are happening every day.
Several criminal methods are being amplified by AI, Wang said. In social engineering attacks, for example, AI collects personal information and crafts highly customized, believable stories to trick victims; Wang cited a reported 1,200 percent increase in the efficiency of such attacks.
Meanwhile, malicious AI tools like WormGPT are available on the dark web, operating with “zero ethical ground rules,” said Wang, and generating malicious code on demand. Even relatively established cyberattack methods like deepfaking are getting a boost: Audio can be mimicked with just 20 or 30 seconds of an individual’s voice, and convincing video deepfakes can be created quickly with basic hardware. In the last two years, according to Wang, there has been a 1,700 percent increase in deepfake fraud.
Fortunately for cybersecurity professionals – and the law-abiding public caught in the crossfire – this is absolutely a situation where one can fight fire with fire. AI is essential for effective defense against these new threats, Wang said.
“We can use AI to speed up the process so that we can have a real-time threat detection system,” he explained. “We can use AI to do some vulnerability scanning. We can even do some automated incident reporting.”
The market for AI investment in cybersecurity is projected to grow from $25 billion in 2024 to $93 billion by 2030, representing an annual growth rate of approximately 24 percent, Wang said.
The takeaway is simple: Future cybersecurity professionals must be fluent in both the opportunities and the security considerations surrounding generative AI to effectively protect systems and stay ahead of increasingly sophisticated attackers.
The Mount’s Center for Cybersecurity promotes awareness and education in the critical field of cybersecurity, preparing students for careers and serving as a resource for the community, especially during annual events like Cybersecurity Awareness Month.