
Communications of the ACM

BLOG@CACM

ChatGPT Helps or Hurts our Cybersecurity?


Saurabh Bagchi

Originally published on the Distant Whispers blog.

From the coverage that ChatGPT, developed by OpenAI, has been receiving since its launch in November 2022, you would be forgiven for thinking that it is the only technology story around. And it deserves the spotlight. Few had expected the jaw-dropping strides this technology has made in the last few years, and it will continue to wow us this year. The bottle has been opened, and a genie with unsurpassed powers has emerged.

Here, let me cast an eye on its implications for our security and privacy in the online world, and on the related question of its ability to spread misinformation.

Security Attacks

ChatGPT is already being weaponized to generate security attacks, such as phishing scams. The specter is that, when fully mature, this technology will be able to generate undetectable attacks, and defenders will be constantly fighting these fires. I do not subscribe to this dystopian view of the future. The fear is that ChatGPT will automatically generate sophisticated attacks. But within this specialized area of automatic attack generation, the dark forces have had sophisticated tools at their disposal for at least a decade. Think back to the DARPA Cyber Grand Challenge competition from 2016. ChatGPT may become an ultra-sophisticated tool in that arsenal, but it represents a progression, rather than a completely new threat vector.

As computer security researchers and practitioners, we have developed sophisticated defenses that have kept most of these attacks at bay. ChatGPT is also a classic case of dual-use technology, and I fully expect that we as defenders will use it to add to our defense arsenal. An important asymmetric advantage we will have is that the technology will be owned by corporations that operate within the legal framework. So, they will put guardrails against misuse of the technology, however imperfect these guardrails may be. Consequently, beneficial uses will be easier to accomplish on this platform than misuses.

Privacy

On the privacy side, I do not believe that ChatGPT moves the needle substantively one way or the other. We will continue to do our online searches, and there will be continuing growth of services that are privacy focused. Those of us who are more privacy conscious will gravitate toward these services (think of DuckDuckGo as a search alternative), even though there will sometimes be monetary costs associated with them (think of Neeva as a search engine).

Spread of Misinformation

Now, coming to the spread of misinformation in the online world: this is a definite worry with ChatGPT and other tools based on large language models (LLMs, for those prone to jargon or acronyms). Compared to their power, the election misinformation campaigns of 2016 will feel distinctly quaint. Today, we counter misinformation campaigns on Twitter and other social media platforms with varying levels of success. But now imagine that it becomes possible to automatically generate many variants of the same false narrative and make them sound incredibly believable. Left unchecked, this will overwhelm our collective cognitive ability to tell truth from fiction.

But here too, I believe this dark future is not preordained, and we have a definite, conscious choice we can make to steer the technology along positive pathways. First, we will develop better defense mechanisms to filter out spurious content. Again, the technology owners will be responsible stewards of the technology, either by choice or due to regulation. For example, they are experimenting with watermarking to distinguish AI-generated content. Second, ChatGPT will be useful as a tool for training humans to tell misinformation apart from actual information. We have struggled to come up with convincing, large corpora for training humans to act as the second line of defense (after automated filters). With ChatGPT, we can relatively easily generate training examples to train users at large scale. Finally, and more speculatively, this will lead us humans to develop a better filter for which news sources to trust. Just as with COVID medical information, large parts of the population in the U.S. developed, over time, a good "nose" for the authoritative sources of information. Similarly, news outlets or channels within social media will be differentiated by the level of trust people place in them. Those peddling auto-generated stories of dubious credibility will suffer and, over time, wither away.

Summing Up

Taking a step back, I do believe that ChatGPT signals a major change in the way AI technology affects our world, across many different sectors, including my own of higher education. For example, how do we educate our learners in a way that adds value beyond AI tools, and how do we evaluate our learners so that we are evaluating their work and not that of an AI tool? There is reason to be circumspect about the technology because it has tremendous power. With all that said, I am an optimist with respect to this technology. It puts enormous power in the hands of us as defenders of security and privacy. And it provides an asymmetric advantage to the defenders, as the technology will be owned by companies bound by laws and regulations.

 

Saurabh Bagchi is a professor of Electrical and Computer Engineering and Computer Science at Purdue University, where he leads a university-wide center on resilience called CRISP. His research interests are in distributed systems and dependable computing, and he and his group have the most fun making and breaking large-scale usable software systems for the greater good.


 
