Cybersecurity: Chatbots promise efficiency but also pose a threat
By Carl Mazzanti
Technology can be a double-edged sword, boosting solos, firms, and other organizations on the one hand while posing a threat on the other. This is illustrated by one of the newest and most exciting advances, artificial intelligence, and one of its spinoffs: natural language processing chatbots. ChatGPT (Generative Pre-trained Transformer), a large language model trained by OpenAI, and other chatbots may provide value, but they also present potentially serious threats to security and reputation. Firms, however, can work with their IT service providers to reduce the risks.
Attorneys can leverage legal advice chatbots to increase engagement with their websites by proactively asking visitors questions, suggesting help, or offering details for scheduling meetings. Chatbots can also digitally collate accident details, health information, and other client intake information so an attorney can quickly decide whether to accept or reject a case. And chatbots can record all interactions with potential leads and clients, freeing attorneys to analyze the information at their convenience before meeting with a potential or existing client.
The very strengths of chatbots, however, can also expose attorneys to legal, regulatory, and liability risks. Because they are designed to interact with users in natural language, a chatbot could be tricked into revealing confidential information to unauthorized users. For example, chatbots can be integrated with other systems, including customer relationship management (CRM) architecture, which can contain a vast amount of sensitive information. If an outsider gets into the system and asks the chatbot for an individual's password or credit card number, the chatbot may not recognize the request as sensitive and may release the information to the unauthorized user.
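One way an IT partner might guard against that kind of leak, sketched below in Python, is to scan a chatbot's outgoing reply for card-like digit runs and redact any that pass the Luhn checksum used by payment card numbers. This is a minimal illustration, not a specific product's API; the function names and pattern are the author's assumptions.

```python
import re

def luhn_valid(number: str) -> bool:
    """Return True if a digit string passes the Luhn checksum
    (the check used by payment card numbers)."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13-19 digits, optionally separated by spaces or hyphens
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def redact_reply(reply: str) -> str:
    """Replace any card-like digit run that passes the Luhn check
    with a placeholder before the reply reaches the user."""
    def _sub(match: re.Match) -> str:
        digits = re.sub(r"[ -]", "", match.group())
        return "[REDACTED]" if luhn_valid(digits) else match.group()
    return CARD_PATTERN.sub(_sub, reply)
```

A filter like this runs on the chatbot's output, so even a tricked model cannot hand a card number to an unauthorized user; ordinary digit runs such as phone numbers pass through untouched.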
Another security risk is chatbots' susceptibility to hacking. Attackers may inject malicious code or use social engineering tactics to make a chatbot perform harmful actions. For example, a hacker could create a chatbot that poses as a legitimate customer service representative and use it to collect sensitive information from an unwary law firm or other users.
The very ease of interacting with chatbots can also put a firm's reputation at risk. Because a chatbot is designed to converse in "natural language," it may inadvertently provide incorrect, unsuitable, or unprofessional responses. For example, if a user were to ask ChatGPT for advice on a sensitive issue such as sexual harassment, it could potentially provide an inappropriate answer. This could damage the law firm's reputation and expose it to adverse legal or regulatory consequences.
As an initial security step, before even purchasing and deploying a chatbot, a firm may consider restricting its vendor search to reputable providers with a proven track record on security and privacy, and with the depth to provide ongoing, timely maintenance and support for the system.
And to mitigate a wide range of chatbot-related risks, businesses can consult with a reputable cybersecurity solutions firm, which may suggest strategies that limit the amount of sensitive information chatbots can access. For example, the bots may be prevented from accessing credit card or password information. Additionally, a cybersecurity partner can regularly review chatbot logs for unusual activity, helping firms detect and respond to potential security threats.
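The field-restriction strategy can be sketched as a simple allow-list applied to CRM records before any data is handed to the chatbot. The field names below are hypothetical illustrations, not a particular CRM's schema.

```python
# Fields a chatbot's context is permitted to see; everything else
# (passwords, card numbers, health details, etc.) is stripped.
# These names are illustrative, not a real CRM schema.
CHATBOT_SAFE_FIELDS = {"name", "matter_type", "preferred_contact_time"}

def scrub_record(record: dict) -> dict:
    """Return a copy of a CRM record containing only allow-listed
    fields, so sensitive values never reach the chatbot at all."""
    return {k: v for k, v in record.items() if k in CHATBOT_SAFE_FIELDS}
```

An allow-list is deliberately chosen over a block-list here: a new sensitive field added to the CRM later is excluded by default rather than leaked by oversight.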
Chatbots can also be trained to recognize queries about sensitive topics and respond appropriately. So, if an attorney or other user asks for advice on an issue like sexual harassment, the chatbot can be designed to reply with a message that simply directs the user to an appropriate resource, such as a human resources representative. Chatbots should also be updated regularly to address known security vulnerabilities. Taking these steps can help prevent hackers from exploiting vulnerabilities and accessing sensitive information.
Chatbots are a powerful, emerging tool that can provide significant value, but they come with potential security risks. Practices like consulting an experienced IT security firm both before purchasing a chatbot and after deploying it can mitigate those risks while capturing the maximum benefit. Skipping these steps, however, can be like issuing an open invitation to hackers and other unauthorized users.
Carl Mazzanti is president of eMazzanti Technologies, a cybersecurity and IT support organization based in Hoboken, NJ. The company can be reached at [email protected].