
Any technology brings benefits as well as possible challenges, and generative AI (e.g., ChatGPT) is no exception. ChatGPT is a type of artificial intelligence language model (“GPT” stands for generative pre-trained transformer) that carries potential for business uses. Whatever challenges it presents in terms of cybersecurity will become apparent in time. No matter the technology, safeguards will still revolve around people, processes and technology. Read on to learn more about ChatGPT, its potential uses, and the challenges it may bring.

 

What ChatGPT Is, and Why It Matters

 

ChatGPT, a product from OpenAI, is a Large Language Model (LLM) built on datasets from the Internet and pre-trained to answer questions, generate content, and make user interfaces more personal and interactive. Predictive text is already prevalent in email applications, wherein the application tries to guess the next few words or the next sentence; all the user has to do is press the tab key to accept the suggestion, or continue typing to override it. Similarly, the artificial intelligence powering ChatGPT can help generate text by prompting the writer with suggestions based on Internet data. Organizations can save time by improving customer service, content creation and research, and even by automating customer service analytics. Generative artificial intelligence is the enabling technology behind ChatGPT, and its uses are probably limited only by the human imagination. Artificial intelligence assembles information from the Internet, but it’s up to the user to judge the content’s usefulness and accuracy. 

 

Early Adoption of ChatGPT Progresses Quickly

 

While not yet audited for bias and accuracy, ChatGPT has become popular and, with so many adopting it so quickly, will probably become even more so. Technological innovations like the telephone and electricity took decades to reach ubiquity, nearly eighty years in the case of the telephone. Electricity, showcased at the Chicago World’s Fair in 1893, was thought marvelous, but it too had its risks, including fires from improper wiring. According to a CompTIA article, a fire at the 1893 fair spurred a national certification for electricians based on agreed-upon standards. Standards for the use of ChatGPT have yet to be formulated. 


Even with its quick adoption, the use of large language models like ChatGPT raises questions. 

For one, how does the use of ChatGPT support business objectives? Use cases can include personalizing user interfaces, generating content, or automating customer service analytics. Another question has to do with where the data comes from and how it is changed along the way. Businesses need to put governance in place that managers communicate to their reports, educating them about when AI can and should be used. Moreover, like any technology, ChatGPT can be exploited by bad actors, who use AI to develop more sophisticated phishing schemes and even to spoof legitimate websites. 

 

Security Risks of AI and its Applications

 

Phishing and Malware

 

Any new technology can be hijacked by bad actors seeking to steal data, and the breadth of output offered by large language models like ChatGPT gives even amateur hackers more to work with. They can introduce malicious code and formulate malware, offered up to unwitting email recipients via “phishing”: impersonating a legitimate entity to steal email login credentials and other sensitive data. AI-generated malware can in turn invade a company’s entire network. Phishing schemes also stand to become more sophisticated: thanks to AI’s availability in multiple languages, attackers can produce professional-looking emails that fool even readers who know the usual traits of phishing messages, such as typos and spelling errors. 

 

Production of Fake Websites

 

In a similar vein, generated content could be used to build fake websites designed to harvest personal information. Logos and text can closely imitate genuine websites, fooling visitors into thinking they are on a legitimate business website, perhaps even yours. Many bad actors are already setting up sites that use ChatGPT, generative AI and LLM topics as the hook to collect Personally Identifiable Information (PII) from unsuspecting visitors.

 

Data Security at Risk

 

Aside from malware, phishing and fake websites, large language models can put data itself at risk. What about the servers storing the data used by AI? How safe are they? How accurate are the results? And how private is the data? The training data behind the AI supporting ChatGPT is massive, and it is not subject to permissions for use and upload. It is also unknown whether conversational data is encrypted, so this data may not be private, either. In fact, uploading information to ChatGPT effectively places it in the public domain. While much is yet unknown, the current safeguards (people, processes and technology) are still needed to manage risks.

 

Staying Secure When Using ChatGPT

 

On the business side, companies need to keep a finger on the pulse of ChatGPT’s development, creating new policies and updating older ones. What will the business use ChatGPT for, and when? Where does the data come from, and how will it be used? Companies need to take a holistic approach to security, setting ground rules for the use of ChatGPT and educating everyone in the company on those rules. 

 

On the end-user side, individuals need to be vigilant about what data they supply to ChatGPT and to its source, the Internet. They still need to know the signs of a phishing email, perhaps treating any unsolicited email as a possible phishing attempt. 

 

Although OpenAI takes security and privacy seriously, hazards may still exist. Like any tool, ChatGPT needs to be used carefully and in line with business goals. For more assistance, contact your trusted technology advisor today. 
