Despite the escalating threats, a panel of cybersecurity professionals from notable companies, such as Amazon Web Services (AWS), Barracuda, Splunk, and others, have expressed their optimism regarding the potential of generative AI.

The panel took place on Tuesday during this week’s Black Hat USA event. During the discussion, the experts delved into the various ways that artificial intelligence (AI) is shaping the cybersecurity landscape, including its impact on phishing and defences against it, the proliferation of ransomware, and its tangible contributions to threat research.


The following individuals were active participants in the panel discussion:

  • Fleming Shi, CEO of Barracuda Networks
  • Mark Ryland, Head of the Office of the CISO at AWS
  • Dr. Amit Elazari, Co-founder and CEO of OpenPolicy, and Cyber Professor at UC Berkeley
  • Patrick Coughlin, Vice President of Global Security Markets at Splunk
  • Michael Daniel, President of Cyber Threat Alliance and former Cyber Czar during the Obama Administration

A confident Shi stated:

“We have the opportunity to escalate to the point where we can use policies to drive a better behaviour of how we actually improve cybersecurity awareness training early on for humans.”

He argued that victory might be achieved by adopting “the right posture,” which entails “getting humans up to speed, and for the future as well.” “One use case that I’d want to discuss is just-in-time training, which makes use of generative AI and, with the proper prompts and the right sort of data, can be customized to make the training more useful and more appealing. How much do we really enjoy learning about cyber defence? Especially for younger audiences, you add a human touch. They can gain insight from it. They will be well-prepared for their first professional encounter. So I’m hopeful.”


Generative AI as an ‘Amplifier’

Ryland expressed optimism, claiming that generative AI acts as an “amplifier.”

“So there will be new things, new attacks, and new risks,” he said.

“But I believe formal verification or expert-based rule systems combined with generative AI is a formidable force. Combining code generation with the encoding of safety as a more conventional rules-based system is what is currently making developers significantly more efficient. Perhaps a microcosm, but I believe a relatively large generalization can be made from that.”

Daniel, too, is optimistic, “which may sound strange coming from the former cybersecurity coordinator of the United States.”

The tools under discussion, he believes, have “enormous potential” to make cybersecurity work more rewarding for many people. “For instance, it can remove a lot of the alert fatigue and make it much simpler for humans to concentrate on the interesting stuff. As a result, I’m optimistic that we can leverage these resources to make cybersecurity a more interesting field of study. Yes, we could go down a dumb path and have it actually prevent entry, but I think if we use it right, we can actually expand the pool and sort of think of AI as a copilot, as an assistant, and then it actually begins amplifying what the humans can do. And I believe there is absolutely enormous potential in that.”

Generative AI Makes Information Accessible to All

Elazari stated she is “very hopeful and optimistic” because generative AI can do three things that will drive democratization: making more information available, making that information usable, and enabling more efficient and effective use of the information already available.

According to Coughlin, the cybersecurity sector stands to gain the most from AI.

“When you look at the challenges that we’re trying to solve, just the challenges of wrangling massive amounts of data, the challenge of finding needles in haystacks, this is what we talk about, what we do,” he said. “You can still drive a truck through the void between detection and response. When I consider AI’s potential, I think of a turbocharged life raft for the cybersecurity business that will allow us to keep up with the bad guys. And I’m not sure we’ll be able to throw enough bodies at it without it, as I believe we’ve demonstrated. What this signifies for enterprise value capture in the cybersecurity product area has me quite optimistic.”

Coughlin is less optimistic about governance, however, and worries that more regulation is necessary.

He expressed concern that the average age of policymakers and regulators would prevent them from keeping up with the times. “Thankfully, though, we have people like Dr. Amit who are working to help us with this problem. However, that worries me, because I have yet to witness our ability in the regulatory arena and governance, and the United States is already lagging behind the European Union in this regard. This makes me nervous, since I think it means we’ll have to work faster than ever before.”


Bans on ChatGPT: A Divided Panel

The panellists were also divided over the findings of a recent BlackBerry study, which found that 75% of enterprises globally have banned, or are seriously considering banning, ChatGPT and other generative AI applications in the workplace.

Elazari argued that bans are not necessary to ensure safe use.

“We are living in a very competitive environment,” she remarked. “It’s a contest to see which companies can come up with the most creative solutions. It’s a contest not only between the opponents and everyone else but also between geopolitical powers. And in that setting, if you are not utilizing the most cutting-edge technology by opting out of… or choosing not to engage with the technology, you are giving the advantage to the other competitors who are using this powerful tool. Instead of taking a zero-sum approach, improved policies and risk management would be preferable, in my opinion.”

According to Ryland, ChatGPT is essentially consumer technology, designed for mass adoption at no or minimal charge. “When you hear a statistic like corporations banning the use, the analogy should be that they ban the use of Gmail for corporate work,” he said.

“There’s nothing wrong with Gmail, so that’s the right parallel,” he added. “It’s completely safe. However, I do not wish to base the operations of my business on an email service whose primary target market is individual consumers and whose primary method of making money is by selling the service to them.”

Shi argued that even when something is banned, people would find ways to use it anyway.

“If you enable it safely, it’s way better than banning,” he remarked.


Rhyno delivers a range of services that combine to fully protect your infrastructure and data from cybercriminals, anywhere and everywhere, 24/7/365.


About Rhyno Cybersecurity Services

Rhyno Cybersecurity is a Canadian-based company focusing on 24/7 Managed Detection and Response, Penetration Testing, Enterprise Cloud, and Cybersecurity Solutions for small and midsize businesses.

Our products and services are robust, innovative, and cost-effective. Underpinned by our 24x7x365 Security Operations Centre (SOC), our experts ensure you have access to cybersecurity expertise when you need it the most.
