“Hello Alexa, may I have a strategy to test you?”

 

What is a Chatbot?

A chatbot is a computer program designed to simulate conversation with human users over the Internet. Chatbots are software agents that interact with users in conversation, and they are commonly offered as a service by websites so that users can fetch information interactively. Because they live on the messaging apps people already use, they can reach a large audience effectively, provide quick responses, and remain available around the clock. A chatbot aims to reduce the user’s effort, help users find solutions to their problems, and thus improve the security of code and infrastructure.

There are two types of chatbots:

  • Chatbots that are based on rules – They are limited in functionality because they only respond to specific, predefined commands (a minimal example follows this list).
  • Chatbots that are based on artificial intelligence – They are more dynamic because they respond to natural language and don’t require specific commands. They learn continuously from the conversations they have with people.
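To make the distinction concrete, here is a minimal, hypothetical sketch of a rule-based bot: it only recognizes a fixed set of commands and falls back to a canned reply for everything else. The commands and answers are purely illustrative.

```python
# Minimal rule-based chatbot: it responds only to specific, predefined commands.
RULES = {
    "balance": "Your account balance is available in the Accounts tab.",
    "hours": "Our support team is available 24/7.",
    "agent": "Connecting you to a human agent...",
}

def rule_based_reply(message: str) -> str:
    """Return a canned answer if the message contains a known command."""
    text = message.lower()
    for command, answer in RULES.items():
        if command in text:
            return answer
    # Anything outside the fixed command set gets a fallback reply.
    return "Sorry, I can only help with: " + ", ".join(RULES)

print(rule_based_reply("What are your hours?"))  # matches a known command
print(rule_based_reply("Tell me a joke"))        # falls back
```

An AI-based bot replaces the keyword lookup with language understanding, but the contrast above is why rule-based bots stay limited to the commands they were given.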

 

How Do Chatbots Work?

Chatbots send and receive messages through existing voice and text messaging platforms, like Facebook Messenger, Slack, Siri, SMS, and Amazon’s Alexa. When a user sends a message to a chatbot, or vice versa, a request is sent to the parent platform, where the identity of the sender must be verified before the data request is fulfilled. This interface allows users to get information from the brands they engage with via the messaging platforms they already use.
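As a hedged illustration of that verification step, the sketch below shows a webhook handler that checks an HMAC signature on an incoming message before fulfilling the request. The secret, header value, and handler are hypothetical; real platforms such as Facebook Messenger define their own signing schemes.

```python
import hashlib
import hmac

APP_SECRET = b"replace-with-platform-app-secret"  # hypothetical shared secret

def is_authentic(payload: bytes, signature: str) -> bool:
    """Verify that the message really came from the messaging platform."""
    expected = hmac.new(APP_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_incoming(payload: bytes, signature: str) -> str:
    # Reject the request if the sender's identity cannot be verified.
    if not is_authentic(payload, signature):
        return "403 Forbidden"
    # Only now is the data request fulfilled by the bot.
    return "200 OK"
```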

Much of the concern around chatbot security is centered around the applications of chatbots in the financial services sector, where communication between institutions and their clients must not only maintain the highest levels of privacy and security but also ensure compliance with industry regulations.

Banks and other financial institutions already do this through secure messaging services, which transmit client data securely over HTTPS. HTTPS, in conjunction with HTTP metadata techniques, has enabled financial providers like Stripe to retrieve sensitive information securely for years. The same techniques are used by banking chatbots to secure the transmission of user data.

Good & Bad Chatbots

 

Chatbot Testing: Specifics and Techniques

KISS — One Robot, One Job

The underlying principle for a well-performing chatbot is to Keep It Simple, Stupid. Instead of trying to build a bot that does everything, define in detail the things it needs to perform flawlessly: focus first on the most frequent cases, then on possible cases, and lastly address infrequent requests. Make sure the chatbot can redirect the conversation back to its original scope, to avoid unwanted hijacking or abandonment due to inutility (a minimal scope guard is sketched below).
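A hedged sketch of that scope guard, assuming a hypothetical intent classifier and an invented set of supported intents:

```python
# Intents the bot is actually designed to handle (illustrative names).
SUPPORTED_INTENTS = {"check_balance", "transfer_money", "report_fraud"}

def respond(intent: str, handler_reply: str) -> str:
    """Return the normal reply in scope; otherwise steer the user back on topic."""
    if intent in SUPPORTED_INTENTS:
        return handler_reply
    return ("I can help with balances, transfers, and fraud reports. "
            "Which of these would you like to do?")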

Define Testing Methods

Testing a chatbot should address each of its components, starting with the input, the knowledge base, and the intelligence and reasoning, while taking into consideration the infrastructure where the bot is hosted as well as other premises like connectivity and voice communication. To test usability, it is useful to create a list of possible user inputs together with the chatbot’s expected answers, as well as potential problems such as alternative spellings, to ensure the bot still produces the same correct outcome (see the sketch below).
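A hedged example of such an input list, with hypothetical intents, answers, and a `bot.reply()` interface assumed only for illustration:

```python
# Hypothetical usability test cases: user input, expected intent, expected answer.
TEST_CASES = [
    ("What is my balance?", "check_balance", "Your current balance is ..."),
    ("whats my balence",    "check_balance", "Your current balance is ..."),  # misspelling
    ("Block my card",       "block_card",    "Your card has been blocked."),
    ("blok my crad please", "block_card",    "Your card has been blocked."),  # typo variant
]

def run_usability_suite(bot):
    """Feed every input to the bot and record mismatches against expectations."""
    failures = []
    for text, expected_intent, _expected_answer in TEST_CASES:
        # bot.reply() is assumed to return (predicted_intent, reply_text).
        predicted_intent, reply = bot.reply(text)
        if predicted_intent != expected_intent:
            failures.append((text, predicted_intent, reply))
    return failures
```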

Define Metrics

All testing should be based on clear expectations defined by metrics. Even in UX, which can be highly subjective, having clear KPIs can speed up the development process significantly. Other possible useful parameters include the number of steps to perform a specific request, the percentage of returning visitors, the average time spent by the user in one session, retention rates, click-through rates, and the handling of confusion.
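As a hedged illustration, assuming session logs with a user id, timestamps, step counts, and click flags (all field names hypothetical), a few of these metrics could be computed like this:

```python
from datetime import datetime

# Hypothetical session log: one entry per chatbot session.
sessions = [
    {"user": "u1", "start": datetime(2023, 1, 1, 10, 0), "end": datetime(2023, 1, 1, 10, 4),
     "steps_to_goal": 3, "clicked_link": True},
    {"user": "u1", "start": datetime(2023, 1, 2, 9, 0),  "end": datetime(2023, 1, 2, 9, 2),
     "steps_to_goal": 2, "clicked_link": False},
    {"user": "u2", "start": datetime(2023, 1, 1, 11, 0), "end": datetime(2023, 1, 1, 11, 7),
     "steps_to_goal": 5, "clicked_link": True},
]

avg_session_seconds = sum((s["end"] - s["start"]).total_seconds() for s in sessions) / len(sessions)
avg_steps_per_request = sum(s["steps_to_goal"] for s in sessions) / len(sessions)
click_through_rate = sum(s["clicked_link"] for s in sessions) / len(sessions)

users = {s["user"] for s in sessions}
returning = {u for u in users if sum(s["user"] == u for s in sessions) > 1}
returning_visitor_rate = len(returning) / len(users)

print(avg_session_seconds, avg_steps_per_request, click_through_rate, returning_visitor_rate)
```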

Chatbot-specific Testing

Although general web application security and performance testing is required, as an expert in web app testing services from A1-QA points out, for a great chatbot it is necessary to remember that usability comes first.

Interaction

Most users, even when they are aware they are talking to a chatbot, tend to treat the app as a real person, due to the novelty of this technology and our brains being accustomed to human interaction. Testing should focus on keeping the same voice consistent over the entire conversation. There should also be an upfront disclaimer about the chatbot’s abilities, limitations, and preferred way of interaction for best results. If the bot uses voice, it should be able to cope with noise and accents; if it accepts pictures as input, it should guide users about the picture specs required by the underlying algorithm. Testing should also be performed under high-stress conditions to detect the system’s limits (see the sketch below).
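A hedged sketch of such a stress test, assuming a hypothetical HTTP endpoint for the bot: it fires many concurrent requests and reports error counts and tail latency.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party library: pip install requests

BOT_URL = "https://example.com/bot/message"  # hypothetical endpoint

def one_request(i: int):
    """Send a single message and return (status_code, latency_seconds)."""
    start = time.time()
    resp = requests.post(BOT_URL, json={"text": f"stress message {i}"}, timeout=10)
    return resp.status_code, time.time() - start

with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(one_request, range(500)))

errors = [code for code, _ in results if code != 200]
latencies = sorted(lat for _, lat in results)
print(f"errors: {len(errors)}, p95 latency: {latencies[int(0.95 * len(latencies))]:.2f}s")
```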

Answering

Following the defined testing methodology, the chatbot should perform according to the input table. Testing should detect any ambiguous commands or duplicate keywords. With the introduction of NLP, chatbots can do more than simple if/else rules: they can parse text and create their own answers. Testing should therefore play with different inputs and variations of the same input to probe the system’s ability to understand (see the parametrized tests below).
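A hedged sketch of such variation testing with pytest; the module and `classify_intent` function under test are hypothetical.

```python
import pytest

from mybot.nlu import classify_intent  # hypothetical module under test

# Several phrasings and misspellings that should all resolve to the same intent.
VARIATIONS = [
    ("What is my account balance?", "check_balance"),
    ("show balance pls",            "check_balance"),
    ("whats my balence",            "check_balance"),
    ("I want to block my card",     "block_card"),
    ("blok my card now",            "block_card"),
]

@pytest.mark.parametrize("text,expected_intent", VARIATIONS)
def test_variations_map_to_same_intent(text, expected_intent):
    assert classify_intent(text) == expected_intent
```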

Navigation

A chatbot with great UX makes the rules clear from the start by explaining to the user how to go back to an earlier point in the conversation or how to skip to the next one if they hit a dead end. Test if your users can change their selected topic, start over, or look for something else.
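A hedged sketch of such a navigation check, assuming a hypothetical stateful conversation fixture provided by the test suite:

```python
def test_user_can_start_over(bot_conversation):
    """bot_conversation is an assumed pytest fixture wrapping one chat session."""
    bot_conversation.send("I want to book a flight")
    bot_conversation.send("start over")
    reply = bot_conversation.send("help")
    # After a reset, the bot should offer the top-level options again.
    assert "What can I help you with" in reply
```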

Handling Errors

It is necessary for a chatbot to understand when the user has made a mistake. A simple heuristic, such as receiving more than two error responses in a row, can be a good trigger that the bot is not performing according to its intended function (see the sketch below).
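A hedged sketch of that heuristic, with all names hypothetical: count consecutive error or fallback replies and escalate (for example, to a human agent) once the threshold is crossed.

```python
MAX_CONSECUTIVE_ERRORS = 2

class ErrorTracker:
    """Track consecutive fallback/error replies within one conversation."""

    def __init__(self):
        self.consecutive_errors = 0

    def record(self, reply_was_fallback: bool) -> bool:
        """Return True when the bot should escalate or restart the conversation."""
        if reply_was_fallback:
            self.consecutive_errors += 1
        else:
            self.consecutive_errors = 0
        return self.consecutive_errors > MAX_CONSECUTIVE_ERRORS
```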

Agile and Continuous Testing

Chatbots are excellent examples of software that can be developed using the Agile approach. The minimum viable product can be enriched during each iteration with new phrases captured by the error management functions (a sketch of harvesting such phrases follows). To ensure no bugs creep into the bot, testing should also be performed at each iteration.
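A hedged sketch of feeding error-management output back into the next iteration, assuming a hypothetical log file with one JSON event per line:

```python
import json

def collect_unhandled_phrases(log_path: str, known_phrases: set) -> list:
    """Extract new phrases the bot failed to understand, for the next iteration."""
    new_phrases = []
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)  # one JSON event per line (assumed log format)
            if event.get("type") == "fallback":
                text = event.get("user_text", "").strip().lower()
                if text and text not in known_phrases:
                    new_phrases.append(text)
                    known_phrases.add(text)
    return new_phrases
```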

 

How to Combat Threats and Vulnerabilities by Securing Chatbots?

Chatbots use two basic processes to ensure security: authentication and authorization. Requests for data through chatbots are verified using authenticated tokens, which allow users to verify their identity without repeatedly entering their login credentials.

For example, say a user wants to order an Uber ride through Facebook Messenger. While logged into Messenger, the user would send a ride request through the Messenger app. After verifying the user’s identity, the app generates a secure authentication token, which is relayed to Uber along with the ride request. The receipt of the authentication token allows Uber to verify the identity of the user, and the request is processed.
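A hedged sketch of that token exchange using the PyJWT library, with an entirely hypothetical shared secret and service split: the messaging platform signs a short-lived token after authenticating the user, and the downstream service verifies it before processing the request.

```python
from datetime import datetime, timedelta, timezone

import jwt  # third-party library: PyJWT

SHARED_SECRET = "replace-with-a-real-secret"  # hypothetical signing key

def issue_token(user_id: str) -> str:
    """Messaging-platform side: issue a short-lived authenticated token."""
    payload = {
        "sub": user_id,
        "iat": datetime.now(timezone.utc),
        "exp": datetime.now(timezone.utc) + timedelta(minutes=5),  # authentication timeout
    }
    return jwt.encode(payload, SHARED_SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    """Service side (e.g., the ride provider): verify identity before fulfilling the request."""
    claims = jwt.decode(token, SHARED_SECRET, algorithms=["HS256"])  # raises if expired or forged
    return claims["sub"]

token = issue_token("user-123")
print(verify_token(token))  # "user-123"
```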

Chatbots can be secured using many of the same security strategies used for other mobile technologies:

User identity authentication: A user’s identity is verified with secure login credentials, such as a username and password. These credentials are exchanged for a secure authenticated token that is used to continually verify the identity of the user.

Authentication timeouts: Authenticated tokens can be revoked either by the user or automatically by the platform after a given amount of time.

Two-factor authentication: A user is required to verify their identity through two separate channels (e.g., once by email, then again by text message).

Biometric authentication: A user is required to verify their identity using a unique physical marker, such as a fingerprint or retina scan (e.g., Apple’s Touch ID).

End-to-end encryption: The entire conversation is encrypted so that only the two parties involved in the conversation can read it. Facebook Messenger recently implemented this capability with their Secret Messages feature, and we hope to see support for bot integration soon.

Self-destructing messages: When potentially sensitive information is transmitted, the message containing this information is destroyed after a given amount of time. Our banking chatbot Abe does this on Slack.

Intent level authorization: Intent-based communication is driven by combining two inputs – state and context. State refers to the chat history, while context is the outcome of the analysis performed on the user’s inputs.

Channel authorization: Chatbots have the unique convenience of being available on multiple channels such as Skype for Business, Microsoft Teams, Facebook, and Slack. Organizations can restrict the use of a chatbot to specific communication channels to ensure better security and compliance.

Intent level privacy (GDPR compliance): Under the newly framed General Data Protection Regulation (GDPR), organizations are required to preserve the privacy of the personal details users share with the chatbot.

API services security testing: While performing chatbot security testing, it is also necessary to test the security of the underlying APIs in detail to find vulnerabilities (a minimal check is sketched below).
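As a hedged starting point for that API testing, with a hypothetical list of bot-facing endpoints, one basic check is that every endpoint rejects unauthenticated and tampered requests:

```python
import requests  # third-party library: pip install requests

# Hypothetical bot-facing API endpoints to probe.
ENDPOINTS = [
    "https://example.com/api/v1/conversations",
    "https://example.com/api/v1/users/me",
]

def check_requires_auth(url: str) -> bool:
    """An unauthenticated request should be rejected (401/403), never answered."""
    resp = requests.get(url, timeout=10)
    return resp.status_code in (401, 403)

def check_rejects_bad_token(url: str) -> bool:
    """A forged or expired token should also be rejected."""
    resp = requests.get(url, headers={"Authorization": "Bearer not-a-real-token"}, timeout=10)
    return resp.status_code in (401, 403)

for url in ENDPOINTS:
    print(url, "no-auth rejected:", check_requires_auth(url),
          "bad-token rejected:", check_rejects_bad_token(url))
```

Dedicated API security testing goes much further (injection, rate limiting, authorization bypass), but checks like these catch the most basic exposure.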

 

Conclusion

Concern for security is an important part of the digital age, and each new innovation comes with security concerns. However, the technologies making up most chatbots have been used to safely transmit data for many years.

Chatbot developers are keenly aware of the need for privacy and security, particularly those building chatbots for the financial industry. When these built-in security measures are combined with basic user security precautions, chatbots provide both strong security and ease of accessibility. This allows enterprises to offer their clients the best customer experience using the latest security measures.

It’s easy to become overly concerned about security with new technologies, but in the case of chatbots, this concern is largely misplaced. Chatbots are built on the same secure Internet infrastructure and integrated platforms as websites and apps; they just provide different user experiences.

 

Author,

Mayuresh Barbade

Attack & PenTest Team

Varutra Consulting