Chatbot vs. person: Research on human-machine interaction

Dr Jenny Lynden, a Staff Tutor in the School, has been researching human-machine interaction.

In a recent article in the journal Discourse, Context & Media, Jenny, together with a colleague from the University of Surrey and designers from the chatbot company AMO, published findings from their research. They used a microlevel sociolinguistic analysis to understand how humans and machines interact with each other, identifying the types of communication that are likely to lead to higher levels of user engagement with a chatbot.

Interestingly, when people engage with chatbots online, they usually report perceiving them as social actors and expect the chatbot to behave like a ‘human’. This research found a number of exchanges in which the chatbot contravened the social conventions of human interaction, particularly when the bot’s responses failed to recognise and acknowledge users’ ‘rights to knowledge’, failed to answer users’ questions competently, or did not give the user an opportunity to ‘repair’, or correct, misinformation. As a result of the analysis, the AMO chatbot designers reprogrammed the chatbot and are currently evaluating the impact of this on the user experience.
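To illustrate the ‘repair’ idea in concrete terms: a minimal sketch, in Python, of a prompt:response turn that always hands the conversation back to the user so they can correct a misunderstanding. The intents, wording, and function names here are hypothetical illustrations, not taken from the study or from AMO’s actual system.

```python
from typing import Optional

def bot_turn(matched_intent: Optional[str], user_prompt: str) -> str:
    """Return the bot's reply, always leaving the user an opportunity to repair."""
    # Illustrative canned responses for a prompt:response (non-natural-language) bot.
    responses = {
        "opening_hours": "We are open 9am-5pm, Monday to Friday.",
        "returns": "You can return items within 30 days.",
    }
    if matched_intent in responses:
        # Answer, then explicitly invite correction: a built-in repair slot.
        return (f"{responses[matched_intent]} "
                "Did that answer your question, or did you mean something else?")
    # No confident match: acknowledge the user's prompt and hand the turn back,
    # rather than replying with an unrelated canned answer.
    return (f"Sorry, I'm not sure I understood '{user_prompt}'. "
            "Could you rephrase, or pick a topic: opening hours, returns?")
```

The key design point, following the paper's diagnosis, is that both branches end by returning the conversational floor to the user, so a wrong answer never becomes a dead end.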


This paper discusses how a microlevel linguistic analysis, using interactional sociolinguistics as an umbrella framework and drawing on analytical concepts from politeness theory and conversation analysis, can be used to advise chatbot designers on the interactional features contributing to problematic human user engagement as part of a consultancy project. Existing research using a microlevel linguistic analysis has analysed human user:bot interactions using natural language. This research has identified a central role for language which promotes sociability between the machine and users in the alignment of their goals and practices. However, no current research discusses how a microlevel linguistic analysis can help identify how the discursive construction of alignment and affiliation within prompt:response chatbots supports social presence and trust. This paper addresses that gap through an analysis of a database of prompt:response chatbot interactions, which identified problematic sequences involving misalignment and disaffiliation that undermined human users’ trust and sense of social presence within the interaction. It also reports on how the consultancy project suggested changes to the programming of the chatbot which have the potential to lead to improved user engagement and satisfaction.

Doris Dippold, Jenny Lynden, Rob Shrubsall and Rich Ingram (2020) ‘A turn to language: How interactional sociolinguistics informs the redesign of prompt:response chatbot turns’, Discourse, Context & Media, vol. 37.


Open University staff and students can access the journal Discourse, Context & Media through the OU library.

Read more about Jenny Lynden’s work here: http://www.open.ac.uk/people/jml364
