First Monday

Chatbot-mediated public service delivery: A public service value-based framework by Tendai Makasi, Alireza Nili, Kevin C. Desouza, and Mary Tate

Chatbots — computer programs designed to interactively engage with users, replicating humanlike conversational capabilities during service encounters — have been increasingly deployed across a wide range of Internet-based public services. While chatbots provide several advantages (e.g., improved user experience with reduced waiting times to service access), the surge of chatbot use in public service delivery has frequently been plagued with controversy, poor publicity, and legal challenges. One important reason for this is that users of the services, and the wider public, do not always feel that chatbot-mediated services demonstrate the appropriate public service values. We investigate the public service value dimensions required in chatbots designed for use in the public sector. Specifically, we (a) review chatbots and their use in the delivery of public services; and, (b) develop a framework of how public service values can be exemplified by chatbots. Our study provides implications and evaluation criteria for stakeholders in chatbot-assisted public services, including researchers, public managers, and citizens.


Chatbots and public service delivery
Public service values
Framework development




If you go online to access a public service, you are likely to encounter a pop-up assistant. Increasingly, this assistant is not a “real person” but a chatbot supported by artificial intelligence. Public agencies introduce chatbots for many reasons, but a common reason is to save money, by providing automated answers to simple and frequent queries. Saving public money, it is argued, benefits the public by maximizing the effectiveness of their tax payments.

However, due to the nature of how chatbots are designed, how they function, their opaqueness, and their potential impacts on the public, chatbots are frequently mired in social, ethical, and political concerns (Desouza, 2018; Desouza and Krishnamurthy, 2017; Winfield, 2019). Chatbots in the public sector have been plagued with controversy, poor publicity, and legal challenges (Petriv, et al., 2020; van Noordt and Misuraca, 2019). One of the reasons for this is that users of the services, and the wider public, do not always feel that chatbot-mediated services demonstrate the appropriate public service values (Thierer, et al., 2017). For example, Nadia — a chatbot designed by the Department of Human Services (DHS) in Australia — was discontinued after public concerns over its ability to meet the needs of people with disability (Probyn, 2017).

The notion of “value” in the public sphere is broader than simply saving money. It needs to encompass ideas such as trust, fairness, and transparency. These are captured in the broad concept of “public value” which encompasses ideas about the proper purpose of government, guidelines for those in positions of authority in government, and technical ideas that can be used to measure and guide the performance of public agencies. Taking this broader perspective can help both those directly involved in the development and delivery of public service chatbots, and also users and other stakeholders, to evaluate them. However, this notion of public value is very abstract. A closely related, but more understandable notion is “public service values” which are values held by public servants, individually and collectively. They are defined as a collection of social, professional, ethical, and other values that facilitate reasonable, legitimate, and relevant actions in the public sector (Witesman and Walters, 2014). Public service values in turn can be broken down into a number of dimensions, as we will describe.

Chatbots in the public sector have often been viewed as a means for improving efficiency (Bannister and Connolly, 2014) and the need to demonstrate public service values has often been neglected (van Doorn, et al., 2017). In this paper, we develop a framework that looks at how public service value can be realized with chatbots. Our discussion is illustrated by examples of current deployments of chatbots.



Chatbots and public service delivery

Among the wide assortment of Artificial Intelligence technologies, chatbots have seen a significant uptake by public agencies. Chatbots are deployed across a range of public services, including immigration, law enforcement, health, transportation, and utilities (Mehr, 2017; Wirtz, et al., 2019). They emulate conversations with humans using natural language processing capabilities, enabling them to recognise requests and facilitate text-based or voice-based dialogues (Cassell, et al., 2000; Abu Shawar and Atwell, 2007) to respond to service enquiries and predict user behaviour based on previous enquiries of a similar nature. Variations of chatbot interfaces are driven by the user’s device of access (e.g., mobile phone, laptop) and the platform (e.g., messaging app, Web page) that supports the chatbot. The advantages of chatbots include lowered service delivery costs, reduced employee workloads, and efficient service delivery. For service recipients, advantages can include improved user experience with reduced waiting times to service access and quick access links to domain-specific knowledge (Brandtzæg and Følstad, 2018; Følstad, et al., 2018). For example, in the U.K., the hugely successful DoNotPay site, which already offered AI-supported legal advice (Lawyer bot, developed by Joshua Browder) for contesting traffic tickets, has recently been extended with a bot that supports people evicted from their accommodation to seek a new home (Walker, 2017).

Chatbots are increasingly deployed as a digital channel of interaction to deliver public services at varying levels of sophistication (Nili, et al., 2019; Riikkinen, et al., 2018). Service delivery frequently requires mutual exchange of information between the service user and the service provider (Radnor, et al., 2013). There are three main levels in chatbot-mediated service delivery (Figure 1), which we derived based on discussions of chatbot architectures and characteristics by Riikkinen, et al. (2018) and Montenegro, et al. (2019). The first level, “information provisioning”, involves providing information about available services, based on questions or search terms provided by the user. This typically does not require authentication, and is similar to an automated ‘frequently asked questions’ service. The second service level, “targeted assistance”, involves the collection and analysis of relevant, personalized service information, and usually requires the disclosure of identifying information by the user. The third level, “service negotiation”, is when possible service outcomes are negotiated between the user and the provider. At each level, the chatbot can be complemented by intervention from a human service agent if the chatbot detects that the user is not receiving a relevant response (e.g., when the user asks similar questions with the same intent multiple times). For example, Ask Jamie, a chatbot that supports Singapore citizens with identifying the public agency Web sites they need to visit for specific queries, escalates the conversation to a live chat with a human service agent (Govtech Singapore, 2019). We explain these three levels of chatbot-mediated service delivery below.


Chatbot-mediated public service delivery levels
Figure 1: Chatbot-mediated public service delivery levels.
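The human hand-off described above — escalating when the user repeats the same intent without getting a useful answer — can be illustrated with a minimal sketch. All names here are hypothetical; a production chatbot would detect intents with an NLU component rather than receive them as strings.

```python
from collections import deque

class EscalationMonitor:
    """Illustrative sketch: flag a conversation for human hand-off when
    the same detected intent recurs within a short window of recent
    turns, suggesting the chatbot's answers are not helping the user."""

    def __init__(self, repeat_threshold: int = 3, window: int = 5):
        self.repeat_threshold = repeat_threshold
        # Only the last `window` intents are kept; older turns expire.
        self.recent_intents = deque(maxlen=window)

    def record(self, intent: str) -> bool:
        """Record the intent detected for the latest user turn; return
        True when the conversation should escalate to a human agent."""
        self.recent_intents.append(intent)
        return self.recent_intents.count(intent) >= self.repeat_threshold
```

A service like Ask Jamie would invoke such a check after each turn and, on a `True` result, transfer the transcript to a live agent.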


Service information provisioning

Service information provisioning involves responding to a user’s general service query, without the need for the user to authenticate themselves. Often, individuals identify the need for a public service when they experience a change of circumstance in their lives (e.g., a change in their employment status, a medical condition). An individual may begin with a general enquiry related to what service or services the organisation offers to help address the need and seek information on how to proceed. This initial enquiry may be answered by a chatbot, which will attempt to interpret the enquiry and match the user to information and service links (Androutsopoulou, et al., 2019; Nili, et al., 2019). “Sam”, a chatbot introduced by the Australian Department of Health, provides users with information about the available services when presented with the user’s social situation (Head to Health, 2018). The platform introduces Sam as a chatbot “to help you find the information and resources ... relevant to your needs, particularly if you’re not sure where to start. You can tell Sam what’s going on and be guided to resources that might be relevant for you”. Another example of a chatbot that provides basic service information is WienBot, a chatbot launched in Vienna in 2017. WienBot provides answers to frequently asked questions people have about the different services the city of Vienna provides (e.g., availability of public parking spaces) (WienBot, 2017).
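At its simplest, this level amounts to matching a free-text query against a set of FAQ entries, with no user authentication. The sketch below is a hypothetical illustration (not the Sam or WienBot implementation, which use natural-language processing rather than keyword lookup); the entries are invented examples.

```python
# Assumed example FAQ entries mapping a keyword to a canned answer.
FAQ = {
    "parking": "Public parking availability: see the city parking page.",
    "mental health": "Mental health resources: see the service directory.",
}

def answer_query(query: str) -> str:
    """Level-one information provisioning: return the first FAQ answer
    whose keyword appears in the query, else a fallback message."""
    query = query.lower()
    for keyword, response in FAQ.items():
        if keyword in query:
            return response
    return "Sorry, I couldn't find that. Try rephrasing your question."
```

Real deployments replace the keyword loop with intent classification, but the interaction contract — anonymous query in, service link out — is the same.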

Targeted assistance

The targeted assistance service level offers an additional level of sophistication in chatbot-mediated service delivery. A chatbot uses the user’s personal information to personalise its response. Such data may include information such as employment status and marital status, and may be stored in an online profile that the chatbot is permitted to access. In addition, the chatbot draws on information about service variants and related business rules stored in a separate database (Ni, et al., 2017; Venkatesh, 2018). For less sensitive public services, such as recommendations for transportation services, the chatbot may provide an ‘immediate’ personalised response to the user without any intervention from a domain expert. Singapore’s ‘Bus Uncle’ chatbot provides users with personalised journey planning based on gathering and analysing information collected during the inquiry and from the user’s stored profile data. The chatbot uses the user’s current location, retrieves the user’s home or work address from their stored profile, and analyses the available options from the bus provider database to recommend a personalised travel journey (Bus Uncle, 2016). For more sensitive services, such as medical advice, the chatbot may not provide any immediate personalised response. Instead, it may send information based on historical and real-time interactions to a human agent for review and a final decision. For example, GYANT, a chatbot that mimics doctor-patient conversation, uses the information a patient provides during the conversation, medical history from their profile, and information from specialised knowledge bases to develop recommendations for a health professional (medical doctor or nurse). The health professional reviews the recommendations and provides a prescription or personalised advice to the patient (e.g., Price, 2018). Each interaction updates the stored service history for the patient.
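The targeted-assistance flow can be sketched as follows. This is a hypothetical illustration of the pattern described above — merge conversation data with a stored profile, then either answer immediately (low-sensitivity services) or route a draft recommendation to a human reviewer (sensitive services); the domain labels and the `recommend` stub are assumptions, not any real agency's API.

```python
# Assumed classification of which service domains need human review.
SENSITIVE_DOMAINS = {"health", "welfare"}

def recommend(request: dict) -> str:
    """Placeholder for matching a request against service rules and
    variants held in a separate database."""
    return f"options for {request.get('need', 'general enquiry')}"

def handle_request(domain: str, conversation_data: dict, profile: dict) -> dict:
    """Combine profile and conversation data (conversation overrides
    the stored profile), then respond or escalate by sensitivity."""
    request = {**profile, **conversation_data}
    recommendation = recommend(request)
    if domain in SENSITIVE_DOMAINS:
        # e.g., GYANT-style review: a clinician sees the draft first.
        return {"status": "pending_review", "draft": recommendation}
    # e.g., Bus Uncle-style immediate personalised answer.
    return {"status": "answered", "response": recommendation}
```

The design point is that sensitivity is decided per service domain, not per user, so the escalation rule stays auditable.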

Service negotiation

Service negotiation involves providing detailed information on the service outcome options and negotiating the best option to address the service consumer’s needs. This means that the customer is presented with service options that suit his or her circumstances and is able to have a conversation about selecting an option. The conversation proceeds as a series of questions and answers with the chatbot, with the possibility of back-tracking to an earlier stage of the negotiation (Androutsopoulou, et al., 2019), and may offer even further personalisation of the final selected service option (Nili, et al., 2019; Zumstein and Hundertmark, 2017). Moreover, because many public services require ongoing interactions between a service recipient and the public agency, this level also ideally supports negotiation on an ongoing basis. For example, a person’s circumstances may change over time (e.g., achieving financial stability a few months after losing a job) and the chatbot can recommend new service options that suit the most recent circumstances. Currently, chatbots that support this level within the public sector are limited in number. One reason could be that implementing such an architecture often proves challenging for public agencies, which are cautious of public opinion, wary of risky decisions related to expensive technological infrastructure, and concerned about potential issues with data privacy (Androutsopoulou, et al., 2019). However, the pace of development of AI chatbots is such that we believe the use of such advanced chatbots in the public service will accelerate.
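The back-tracking behaviour described above can be modelled as a stack of answered questions: each answer narrows the candidate options, and stepping back pops the most recent answer. This is a minimal hypothetical sketch, not a real negotiation engine.

```python
class NegotiationDialogue:
    """Sketch of a question-and-answer negotiation with back-tracking:
    answered questions are pushed onto a stack, and the user can return
    to an earlier stage of the negotiation by popping answers off it."""

    def __init__(self):
        self.answers = []  # stack of (question, answer) pairs

    def answer(self, question: str, answer: str) -> None:
        """Advance the negotiation by recording an answer."""
        self.answers.append((question, answer))

    def back(self) -> None:
        """Return to the previous stage, discarding the most recent
        answer; a no-op at the start of the dialogue."""
        if self.answers:
            self.answers.pop()

    def current_options(self) -> dict:
        """Placeholder: the constraints that would filter the service
        options presented at the current stage."""
        return {q: a for q, a in self.answers}
```

Ongoing negotiation (e.g., revisiting options when circumstances change) amounts to reopening the dialogue with the stored answer stack as its starting state.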



Public service values

Public service values described within a technology context are the justifications for the actions taken and procedures facilitated by digital (Internet-based) technologies and platforms for service delivery (Grimsley and Meehan, 2007; Harrison, et al., 2012; Kearns, 2004). Discussion of “public service values” falls within the broader discussion of “public value” and describes the subset of public value that is relevant to the design and delivery of public services. We examined “public value” research more generally, especially where it focussed on the perspectives of public managers and public servants, for insights into value dimensions that are relevant to our context (in chatbot-mediated public service delivery, chatbots substitute for human agents and represent a public organisation). We examined conceptualizations of public value following Moore’s (1995) normative approach, which was developed to inform public managers on the strategic creation of public value. Moore (1995) outlines a framework known as the strategic triangle, which seeks to account for public value by concurrently addressing three questions: (1) What public value is being created? (2) Where do the legitimacy and support to create the public value come from? (3) What operational capacity is required to deliver the public value? However, Moore’s (1995) widely acknowledged framework for creating public value fails to concretely describe what constitutes public value. In efforts to do so, several authors (e.g., Alford and O’Flynn, 2009; Benington, 2009; Kelly, et al., 2002; Stoker, 2006) have built upon Moore’s framework to create an inventory of different forms of public values (Bozeman, 2007; Talbot, 2011). Some of these are also relevant to public service delivery and have informed our definitions, which we provide in the next section.



Framework development

Existing approaches to evaluating chatbots from a user perspective frequently focus on factors that affect the individual user experience at the interaction level (Fernandes, et al., 2020). These approaches often take a human-computer interaction perspective, focusing on empathy and the extent to which users can easily interact with the chatbot, or an information quality perspective, such as a chatbot’s ability to provide timely and relevant assistance or information (Brandtzæg and Følstad, 2017; Chaves and Gerosa, 2019; Dennis, et al., 2020; Maniou and Veglis, 2020). Users lose confidence in, and often show frustration towards, any chatbot that takes too long or fails to understand their needs (Chaves and Gerosa, 2019; Jain, et al., 2018). Similarly, a chatbot that generates irrelevant responses compromises users’ perceptions of the chatbot’s abilities. Dependability (the extent to which the user feels in control of the interaction) has also been considered an important factor that users value during interaction with chatbots (Chaves and Gerosa, 2019; Holmes, et al., 2019).

While these are all critical factors in satisfying the individual users’ needs during chatbot interactions, we observe that these factors do not comprehensively address public service values for delivering public services via chatbots. We explain the public service values within the chatbot-mediated service delivery context. We then present our public service values framework for chatbot-mediated public service delivery.

Public service values within the chatbot-mediated service delivery context

We first compiled a list of public service values and their definitions and adapted them to the chatbot delivery context. Analysing the definitions of these public service values, we reconciled public service values that had similar descriptions or were closely related (e.g., transparency and openness). In these circumstances, we adopted the most widely used label within the reviewed articles. Where this was not possible, we considered the label for a public service value suggested by studies that focus on the role of ICT in delivering public services (e.g., privacy as expressed by Andrews (2018) was adopted over secrecy and security of information). Next, we looked at how the public service values and their definitions play out with AI-based technologies in public service delivery (e.g., Androutsopoulou, et al., 2019; Barth and Arnold, 1999; Valle-Cruz, et al., 2019). We then adapted the definitions to the features of chatbots. We established a total of 14 public service values (see Table 1), which we explain below. In order to triangulate and further elaborate on the findings, we sought examples from practitioner sources, including 15 public sector chatbots used in public service delivery across the U.S., Europe, Asia, and Australia (see Appendix for a list of these chatbots). These are used to illustrate the dimensions we describe.


Table 1: List of public service values defined in the context of chatbot-supported public service delivery.
Public service value: adapted definition in the context of chatbot-supported public service delivery (references)

Adaptability: The degree to which the chatbot adapts itself to varying conditions (non-technical conditions such as changes in business rules and service eligibility, and technical conditions such as different devices and networks) when used in delivering a service. (Karunasena and Deng, 2009; Meynhardt, 2009)

User orientation: The degree to which the chatbot accommodates user expressions and needs when used to resolve a case. (Bozeman, 2019; Prebble, 2015)

Professionalism: The degree to which the chatbot displays principled, competent, honest, respectful, consistent, and trustworthy conduct when used in delivering a service. (Eriksson, et al., 2020; Meynhardt and Bartholomes, 2011; Thompson, 2016)

Effectiveness: The degree to which the chatbot produces the intended outcome (attains the goal of using the chatbot) given the resources invested when used in delivering a service. (Andrews, 2018; Witesman and Walters, 2014)

Efficiency: The degree to which the chatbot facilitates service delivery while minimizing the cost and resources required. (Karunasena and Deng, 2009; Meynhardt, 2009)

Fairness: The degree to which favouritism and discrimination (based on individual differences) do not exist when the chatbot is used in delivering a service. (Bozeman, 2019; Brewer, et al., 2013; de Graaf, 2015; de Graaf, et al., 2014)

Legitimacy: The degree to which the chatbot complies with lawful and reasonable steps and mandates when used in delivering a service. (Andersen, et al., 2012; Jørgensen and Bozeman, 2007; Kernaghan, 2003)

Acceptability: The degree to which the chatbot is a viable option for service delivery and is beneficial in a way that the public shows a favourable (or minimally negative) reaction to using it. (Andrews, 2018; Prebble, 2015; Witesman and Walters, 2014)

Openness: The degree to which chatbots transparently disclose their identity to users before a service interaction starts and provide the rationale (to users or their representatives, such as customer advocacy groups) for decision-making when delivering a service. (Andersen, et al., 2012; Jørgensen and Bozeman, 2002)

Accountability: The degree to which the chatbot represents an accountable and responsible channel (including acknowledgement of its limitations) when used in delivering a service. (Blader and Tyler, 2003; Tyler, 2003)

Social license: The degree to which the chatbot acquires continued approval as a viable service delivery channel from the community and other stakeholders. (Buhmann, 2016; Gunningham, et al., 2004; Morrison, 2014)

Privacy: The degree to which the chatbot ensures protection of users’ information during and after use in delivering a service via any device, such as smartphones. (Andrews, 2018; Bozeman, 2019)

Trust in government: The degree to which a chatbot contributes towards the public’s wilful sharing of their personal information with the government, regardless of any associated vulnerabilities, such as access to personal information and the services it recommends to the user. (Mayer, et al., 1995; Schoorman, et al., 2007)

Collaborative intelligence: The degree to which a chatbot partners with the user and other service stakeholders, complementing their skills to fulfil service needs. (Butcher, et al., 2019; Epstein, 2015)


Adaptability in public service delivery is about how well a public service is delivered under changing circumstances (Karunasena and Deng, 2009; Meynhardt, 2009). Chatbots need capabilities to adapt to varying non-technical conditions, such as changes in business rules and service eligibility, and to varying technical conditions, such as different networks and devices (e.g., PC, laptop, iPad, and mobile phone) (Kowatsch, et al., 2017; Kuenzig, 2018). Hong Kong’s MTR chatbot, designed to personalise communication with public transport riders (MTR, 2018), is able to instantly adapt to dynamically changing technical conditions, such as changes in networks and their security protocols (e.g., switching from a mobile network to Wi-Fi), while the user is using a transportation service. The chatbot also provides updated information immediately once there is a change in transportation service options and conditions. In terms of adaptability to non-technical changes, Alex, the Australian Tax Office’s chatbot, is an example of a chatbot that provides updated information by adapting to changes in an organisation’s business rules for tax purposes and the related service variants for citizens.

User orientation is present when service users have the ability to control, plan, and decide on actions to address their needs (Bozeman, 2019; Prebble, 2015). Chatbots are increasingly expected to accommodate user expressions, needs, and circumstances in service delivery (Kuenzig, 2018; Suganuma, et al., 2018). Transport for New South Wales introduced the Real-time Intelligent Transport Assistant (RITA) as a service chatbot for local commuters, intended to provide real-time public transport schedule updates (Graham, 2018). However, RITA was only able to provide generic responses and was not capable of taking a specific user’s travel plans into account. Moreover, it continually failed to maintain a conversation about user preferences, leaving the user with no control over expressing their service needs. This led to public resentment, which rendered RITA a failure.

Professionalism involves public service employees’ display of competency, integrity, honesty, and trustworthiness during service delivery (Eriksson, et al., 2020; Meynhardt and Bartholomes, 2011; Thompson, 2016). Because chatbots mimic the conversation between a person and an organisation’s human service delivery agent, the manner in which a chatbot presents itself to the public impacts users’ perceptions of professionalism (Behera, 2016; Sangroya, et al., 2017). Kamu, a Finnish immigration service chatbot that facilitated significant improvements in handling customer inquiries within its first year of operation (Finnish Immigration Service, 2018), was carefully designed with contributions from the department’s user-facing employees and managed to provide a similar experience of professional service delivery (Miessner, 2019a).

Effectiveness within a public service context refers to achieving a desired service outcome given the resources invested and circumstances at hand (Andrews, 2018; Witesman and Walters, 2014). In the context of chatbot-mediated service delivery, effectiveness relates to the chatbot’s capability to handle service requests given the service recipient’s life circumstances (e.g., a health condition, loss of a job, or financial hardship) and specific service needs, which are identified from the customer profile and during conversation with the chatbot (Cameron, et al., 2018; Petriv, et al., 2020). Amelia, a chatbot designed for the Enfield Council to respond to general public queries, was discontinued due to its continued failure to provide useful information in response to public queries. Amelia lacked effectiveness, as it constantly provided users with service recommendations that did not match their service needs (Enfield Council, 2016; Everett, 2017).

Efficiency is the ability to achieve an outcome at minimal resource cost and time (Karunasena and Deng, 2009; Meynhardt, 2009). Although efficiency does not necessarily result in effectiveness, public service organizations often feel pressure to operate in an efficient manner, with two primary aims: meeting the needs of the public in a timely fashion and reducing the cost of service delivery (Kernaghan, 2003). Similarly, when chatbots are introduced in public service delivery, they are expected to facilitate efficient service delivery, particularly because of their instant analysis of service enquiries from users (Brandtzæg and Følstad, 2018; Følstad, et al., 2018). The Queensland Government’s two chatbots, MANDI and SANDI, provide local citizens with timely and relevant information to settle various neighbourhood disputes (Queensland Civil and Administrative Tribunal (QCAT), 2019b; Queensland Government, 2019). The chatbots engaged in more than 10,000 interactions within their first six months and have since continued to gain public popularity as an efficient service channel (Queensland Department of Justice and Attorney General, 2019; Queensland Civil and Administrative Tribunal (QCAT), 2019a).

Fairness is when service delivery is free from favouritism and discrimination based on individual differences (Bozeman, 2019; Brewer, et al., 2013; de Graaf, 2015; de Graaf, et al., 2014; Jørgensen and Bozeman, 2007). Facilitating equal opportunities and equal treatment among individuals and societal groups is an important expectation (Brewer, et al., 2013; de Graaf, et al., 2014; Jørgensen and Bozeman, 2007). When chatbots replace public employees during public service encounters, ensuring fairness in the chatbot-mediated service delivery is expected (Schanke, et al., 2020; Schlesinger, et al., 2018). Emma, a chatbot that serves as a U.S. Citizenship and Immigration Services assistant, interacts with diverse users from various language backgrounds. Emma strives to facilitate equal access to information by providing its services in different languages (currently, English and Spanish) for people from different ethnicities and societal groups. The chatbot handles around half a million interactions per month (Sanchez and Pacheco, 2016). Interestingly, chatbots can sometimes improve the potential for fairness in service delivery, as they do not have the same unconscious biases that human agents have.

Legitimacy refers to conformance with lawful and reasonable policies and legislation through the application of primarily formal mandates (Andersen, et al., 2012; Jørgensen and Bozeman, 2007; Kernaghan, 2003). The level of authority that public agencies have over the services citizens receive is determined and controlled through the application of relevant legislation and regulations (de Graaf, et al., 2014). When chatbots are introduced into the public service experience, they should conform to lawful and reasonable steps and mandates related to service delivery during service encounters (Seering, et al., 2019). In the context of AI-based service delivery, the development of such mandates is a work-in-progress for many administrations, and progress is slower than the speed of advances in chatbot capabilities. Transport for London (TfL) designed TravelBot to provide transport service updates for its customers. TravelBot asks users to share their current location and frequent commute details (e.g., work and home addresses) to provide highly personalised recommendations (Transport for London (TfL), 2017). In compliance with General Data Protection Regulation principles, TravelBot first requests permission from the user to use the current-location feature and presents links to the relevant regulations. It also allows users to enter their location manually in case the user is not willing to share it through this feature.

Acceptability involves building confidence that a process or product is beneficial in a way that the public shows a favourable (or minimally negative) reaction to it (Andrews, 2018; Prebble, 2015; Witesman and Walters, 2014). As with other Internet-based self-service technologies, introducing chatbots requires fostering their viability as a beneficial option, resulting in adoption of the system by a wide group of users (Andrews, 2018). Issues with acceptability are more common in chatbots that provide specialist advice than in chatbots that provide general information. A study of a sample of chatbot users in the U.K. showed that more than 75 percent of users were willing to engage with a health chatbot only for less specialised services (e.g., seeking general health information or making appointments with local health facilities), while a health chatbot that provides specialist advice or medical tests proved less popular (Nadarzynski, et al., 2019). These findings are consistent with the case of Ada, a chatbot developed in Germany by Ada Health GmbH that provides specialist medical diagnosis with high user satisfaction but continuously fails to attract large numbers of new patients (Laumer, et al., 2019).

Openness requires relevant information and insights about an organization’s policies and processes related to delivering public services to be readily and transparently available to the public (Andersen, et al., 2012; Jørgensen and Bozeman, 2002). The public reserve the right to know the procedures and criteria used for decision-making (Andersen, et al., 2012; Jos and Tompkins, 2009). Chatbots should disclose their identity to users before a service interaction starts, and the rationale they use in decision-making and delivering public services should be transparent to the public or their representatives (e.g., customer advocacy groups) (Valério, et al., 2017). The Education Value-Added Assessment System (EVAAS) — an AI-driven public system comparable to public service chatbots — was used as a performance assessment algorithm for teachers in the U.S. and led to the dismissal of several teachers (Webb and Harden, 2017). Because the algorithms behind the EVAAS assessment criteria were not openly shared with the concerned members of the public or their representatives, the dismissals were deemed unjustified, which in turn led to the discontinuation of the system’s use within the public sector.

Accountability is building confidence in the public that procedural and outcome justification exists, and that the public agency is responsible for decisions and actions related to the delivery of public services and for actions that result in a service delivery failure (Blader and Tyler, 2003; Tyler, 2003). In the chatbot-mediated service delivery context, the chatbot should be presented as an accountable channel of service delivery. In other words, if the chatbot provides misleading advice, leading to a service failure, people know that the public service organisation accepts responsibility for the failure. It is also important that the chatbot acknowledges its limitations (Kuenzig, 2018; Valério, et al., 2017). For example, Alex, the Australian Tax Office’s chatbot, starts the conversation with an introductory message that introduces itself as a bot and acknowledges that it is still learning from user interactions (CX Central, 2016; Redrup, 2016). This prepares users in advance for the level of service support to expect from the chatbot, while also preparing them to seek alternative service assistance (contacting a human service agent) in cases that involve more complex service interactions.

Social license involves developing informal and ongoing stakeholder trust, engagement, and support for actions and processes that may be legal but are not necessarily popular or widely accepted by the public (Buhmann, 2016; Gunningham, et al., 2004; Morrison, 2014). For chatbots, gaining social license is important when agencies use chatbots rather than humans to deliver a service, and algorithms to make decisions about which service a user will receive (Følstad and Brandtzæg, 2017; O’Neil, 2016). The Department of Human Services (DHS) in Australia designed Nadia as a chatbot intended to assist disabled service users with their applications for social welfare services (Hendry, 2017). Given the nature of the intended users, it could not be guaranteed that they would possess the necessary capabilities to interact with a non-human agent. Various stakeholders, including the National Disability Insurance Agency (NDIA), indicated a lack of confidence in Nadia’s capabilities to deliver the intended services to the targeted users, leading to the suspension of the initiative (Hendry, 2018, 2017).

Privacy involves establishing the protection and safety of people’s identity and personal information (Andrews, 2018; Bozeman, 2019). Public agencies collect vast amounts of confidential information about citizens during service delivery and should handle this information appropriately in order to prevent it from falling into the wrong hands or being used for unwarranted purposes (Karunasena and Deng, 2009). At the same time, a highly important privacy concern for users of digital agents centres on the potential non-consented sharing (and consequent misuse) of personal data with third-party organizations (Sweeney and Davis, forthcoming). In chatbots, privacy relates to how a chatbot ensures the protection of users’ personal information, including how the information is collected, stored, protected from unauthorised access, and used before, during, and after a conversation with the chatbot’s interface (Androutsopoulou, et al., 2019). Consider Rammas — a chatbot that handles service queries for the Dubai Electricity and Water Authority (DEWA) from service recipients, suppliers, and job applicants. Rammas secures all personalised user interactions with passwords (Sutton, 2018). This ensured privacy means that Rammas is recognised by users as a trusted channel for confidential transactions such as bill inquiries and payments, job applications, and inquiries about and tracking of the status of applications (Sutton, 2019).

Trust in government involves the public’s willingness to share confidential information that potentially renders them vulnerable to the government’s actions, based on the expectation that the government will perform beneficial actions (Mayer, et al., 1995; Schoorman, et al., 2007). In the context of chatbots, trust represents the degree to which a chatbot contributes towards the public’s wilful sharing of their personal information with the government regardless of any associated vulnerabilities, such as access to personal information and the services it recommends to the user (Androutsopoulou, et al., 2019; Froy, 2019). Babylon — a chatbot backed by the U.K. National Health Service (NHS) to assist with patient diagnosis — was branded ‘a dangerous app’ by the public after it posed a series of unrealistic or controversial questions to users (e.g., asking a 66-year-old woman if she was pregnant) (Blanchard, 2019). This in turn affected the public’s trust in the NHS.

Collaborative intelligence refers to an effective partnership between stakeholders that complements the skills of all when designing and establishing a service (Butcher, et al., 2019; Epstein, 2015). When designing a public service chatbot, the design process should follow a holistic approach that involves contributions from various stakeholders with complementary skills and expertise (e.g., designers, software developers, legal consultants, academic researchers, and customer advocacy groups) (Wilson, et al., 2017). Kamu, PatRek, and VeroBot are three networked chatbots designed to provide services to foreign entrepreneurs in Finland (Miessner, 2019b; PwC EU Services, 2019). Establishing such a networked setup involved contributions from diverse stakeholders with varying backgrounds, including entrepreneurs, service managers, designers, and software developers, to ensure the seamless integration of the systems, resulting in a robust overall chatbot-mediated service delivery (PwC EU Services, 2019). The form of collaborative intelligence can also be dynamic. For example, there is potential for collaboration between the chatbot, the domain experts who train it (e.g., by providing training data and upgrading its capabilities), and the public who share their information with it.

A public service value framework for chatbot-mediated public service delivery

All chatbots should demonstrate the full range of public service values in their interactions with the public. However, we suggest that both the challenge and the importance of understanding and embodying public values increase significantly with the level of sophistication of the bot. We summarize this in Figure 2.


Figure 2: A public service value framework for chatbot-mediated service delivery.


For example, it is much more difficult to be open about the basis on which an answer is provided if the answer is highly contextualized, personalized, and produced by a generative algorithm (Level 3) than if it is simply a lookup based on a decision table (Level 1). Effectively, each answer may be to some extent unique, so the algorithm itself needs to be audited. Achieving public acceptance and social license, and reassuring the public about the fairness of answers, will be more challenging if chatbots are used as well as, or instead of, human agents to handle more complex enquiries and negotiations. While chatbot technology is maturing rapidly, it may not be effective or efficient to implement “Level 3” chatbot service agents in the short to medium term. The complexity of the software, the necessity for rich training data, and the challenges of auditing generative algorithms to ensure fairness and openness may constrain the implementation of “higher level” services.

Although the importance and difficulty of embodying public service values increase in general with greater sophistication, we suggest that the importance of some values will increase more sharply than others. For Level 2 chatbots, the need to uphold privacy and a sense of trust in government increases greatly as people authenticate, give permission for their data to be accessed, and trust the chatbot with greater amounts of personal and contextual information, whereas Level 1 services can frequently be offered without authentication or extensive disclosure of personal information. Level 3 chatbots may facilitate and support complex and contextual collaboration, communication, and negotiation. The requirement to show a commitment to collaborating effectively with the user and other stakeholders increases greatly as more sophisticated service negotiation is offered.




Chatbots are increasingly expected to contribute actively towards the realisation of a service interaction, playing a role that mimics conversation with a human service agent (e.g., learning from service encounters as well as negotiating a service offering) instead of the more common, fully structured transaction-processing role. Therefore, they need to demonstrate the full range of public service values expected of a human agent, and, as with human agents, these values need to be designed into the service development and delivery processes. Our first contribution is to clarify the levels of service chatbots can support and to apply this to public service value dimensions, as demonstrated in our proposed public service values framework for chatbot-mediated public service delivery (Figure 2). We explained that the difficulty and importance of demonstrating some public service values can increase with the level of sophistication of the chatbot. In particular, the importance of protecting privacy, maintaining trust in government, and demonstrating collaborative intelligence increases sharply with more sophisticated services.

Our findings can be useful for users of chatbots, for researchers, and for public policy-makers, public managers, and technical managers who supervise service management and channel management departments, by contributing to more thoughtful designs of public services. The decisions made during the phases of chatbot development (including initial planning, design, and evaluation) need to be grounded in an appropriate comprehension of public service values. For instance, fairness can be supported if the individuals managing the chatbot are able to determine the chatbot elements (e.g., training data) that are important for minimizing favouritism and discrimination in public service delivery. Our findings can also be adopted as part of the service standards that a chatbot-mediated service needs to meet, and therefore act as criteria for evaluating the designated service. Users of chatbots can evaluate their own service experiences in a wider context.

We presented the public service values as distinct dimensions; however, we do not rule out the possibility of relationships between some of them. For instance, a relationship can be claimed between fairness and adaptability, in the sense that a chatbot can only exhibit fairness if it is adaptable. Another relationship can be inferred between acceptability and social license. Finally, future researchers can conduct studies that contribute to establishing mechanisms for translating the principles and policies that govern the conduct of human public service agents into chatbots that serve in the public sector.




Careful attention to upholding these values in chatbot-mediated service delivery has a range of benefits for the public, including better and more flexible services, improved service levels, and an improved sense of safety and trust in government. It also has corresponding benefits for public agencies, including improved reach, trust, and quality in service delivery. However, there are also many negative consequences for both users and agencies associated with failing to uphold these values. Many of these negative consequences have been “writ large” in negative publicity and lawsuits, so public agencies should not need much persuading of the importance of values-based design and delivery. While the capabilities of chatbots are advancing fast, our understanding of the public service values that chatbots need to support is limited. The more advanced a chatbot is (i.e., the more levels of service delivery it covers), the more important and difficult this challenge becomes. There is a real risk of the technical capabilities of chatbots running well ahead of our understanding of how to manage their public service value role. Given the lack of an overview of chatbots and the public service values they need to support in the public sector, our conceptual framework presents relevant insights from the academic and practitioner literature that are essential to consider in order to uphold public service values with the introduction of chatbots. Attention to these dimensions is essential for maintaining citizen satisfaction and trust in public services in a chatbot-mediated environment. End of article


About the authors

Tendai Makasi is currently a Ph.D. candidate at the School of Information Systems at Queensland University of Technology. His research focuses on aspects of design and evaluation of artificial intelligence-driven technologies within the public sector. He has a background in mathematics and computer science, and holds a Master’s degree in business process management from the Queensland University of Technology.
E-mail: tendai [dot] makasi [at] hdr [dot] qut [dot] edu [dot] au

Alireza Nili received his Ph.D. in information systems from Victoria University of Wellington in New Zealand in 2016, and is currently a lecturer in service science at the School of Information Systems at Queensland University of Technology (QUT) in Australia. He specializes in both behavioral information systems (customer decision making and use behavior) and design information systems and uses both qualitative and quantitative methods in his research. His research interests primarily focus on the design and evaluation of chatbots and other Internet-based self-service technologies for delivering public services (e.g., education and health). He has published in journals such as International Journal of Information Management, MIT Sloan Management Review, Communications of the Association for Information Systems, and Electronic Commerce Research, and has presented his research in several international information systems conferences. He has served roles such as Associate Editor at the International Conference on Information Systems and roles such as Track Chair and Program Committee member at the Australasian Conference on Information Systems.
E-mail: a [dot] nili [at] qut [dot] edu [dot] au

Kevin C. Desouza is a Professor of Business, Technology and Strategy in the School of Management at the QUT Business School at the Queensland University of Technology. He is a Non-resident Senior Fellow in the Governance Studies Program at the Brookings Institution. He formerly held tenured faculty posts at Arizona State University, Virginia Tech, and the University of Washington and has held visiting appointments at the London School of Economics and Political Science, Università Bocconi, University of the Witwatersrand and University of Ljubljana. Desouza has authored, co-authored, and/or edited nine books. He has published more than 140 articles in journals across a range of disciplines including information systems, information science, public administration, political science, technology management, and urban affairs. Several outlets have featured his work including MIT Sloan Management Review, Stanford Social Innovation Review, Harvard Business Review, Forbes, Businessweek, Wired, Governing, Wall Street Journal, USA Today, NPR, PBS, and Computerworld. Desouza has advised, briefed, and/or consulted for major international corporations, non-governmental organizations, and public agencies on strategic management issues ranging from management of information systems, to knowledge management, innovation programs, crisis management, and leadership development. He serves as senior editor for the Journal of Strategic Information Systems. Desouza has received over $US2 million in research funding from both private and government organizations.
E-mail: kevin [dot] desouza [at] qut [dot] edu [dot] au

Mary Tate is an associate professor at Victoria University of Wellington in New Zealand. Mary’s research interests focus on digital channels and services using new technologies, especially e-services, e-HRM, welfare services, use of agents/bots, shared services and multi-channel strategy. This is complemented by a strong interest in the foundations of the information systems discipline, theory and research methods. Mary’s work has been recognized with more than $1 million in research grant funding, mainly in Australia. Before joining Victoria University of Wellington, Mary had an extensive background as an IT practitioner, with over 20 years’ experience in service delivery, project management, Web site management, and business analysis. Mary has published numerous conference and journal papers such as papers in MIT Sloan Management Review, European Journal of Information Systems, Journal of the Association for Information Systems, Information & Management, and International Journal of Information Management.
E-mail: mary [dot] tate [at] vuw [dot] ac [dot] nz



The authors have no known conflicts of interest to disclose.



B. Abu Shawar and E. Atwell, 2007. “Chatbots: Are they really useful?” Journal for Language Technology and Computational Linguistics, volume 22, number 1, pp. 29–49, and at, accessed 2 November 2020.

J. Alford and J. O’Flynn, 2009. “Making sense of public value: Concepts, critiques and emergent meanings,” International Journal of Public Administration, volume 32, numbers 3–4, pp. 171–191.
doi:, accessed 2 November 2020.

L.B. Andersen, T.B. Jørgensen, A.M. Kjeldsen, L.H. Pedersen, and K. Vrangbæk, 2012. “Public value dimensions: Developing and testing a multi-dimensional classification,” International Journal of Public Administration, volume 35, number 11, pp. 715–728.
doi:, accessed 2 November 2020.

L. Andrews, 2018. “Public administration, public leadership and the construction of public value in the age of the algorithm and ‘big data’,” Public Administration, volume 97, number 2, pp. 296–310.
doi:, accessed 2 November 2020.

A. Androutsopoulou, N. Karacapilidis, E. Loukis, and Y. Charalabidis, 2019. “Transforming the communication between citizens and government through AI-guided chatbots,” Government Information Quarterly, volume 36, number 2, pp. 358–367.
doi:, accessed 2 November 2020.

F. Bannister and R. Connolly, 2014. “ICT, public values and transformative government: A framework and programme for research,” Government Information Quarterly, volume 31, number 1, pp. 119–128.
doi:, accessed 2 November 2020.

T.J. Barth and E. Arnold, 1999. “Artificial intelligence and administrative discretion: Implications for public administration,” American Review of Public Administration, volume 29, number 4, pp. 332–351.
doi:, accessed 2 November 2020.

B. Behera, 2016. “Chappie — A semi-automatic intelligent chatbot,” at, accessed 18 April 2019.

J. Benington, 2009. “Creating the public in order to create public value?” International Journal of Public Administration, volume 32, numbers 3–4, pp. 232–249.
doi:, accessed 2 November 2020.

S.L. Blader and T.R. Tyler, 2003. “A four-component model of procedural justice: Defining the meaning of a ‘fair’ process,” Personality and Social Psychology Bulletin, volume 29, number 6, pp. 747–758.
doi:, accessed 2 November 2020.

S. Blanchard, 2019. “NHS-backed GP chatbot asks a 66-year-old woman if she’s PREGNANT before failing to suggest a breast lump could be cancer,” Daily Mail (27 February), at, accessed 3 April 2019.

B. Bozeman, 2019. “Public values: Citizens’ perspective,” Public Management Review, volume 21, number 6, pp. 817–838.
doi:, accessed 2 November 2020.

B. Bozeman, 2007. “Public values,” In: B. Bozeman. Public values and public interest: Counterbalancing economic individualism. Washington, D.C.: Georgetown University Press, pp. 132–158.

P.B. Brandtzæg and A. Følstad, 2018. “Chatbots: Changing user needs and motivations,” Interactions, volume 25, number 5, pp. 38–43, and at, accessed 2 November 2020.

P.B. Brandtzæg and A. Følstad, 2017. “Why people use chatbots,” In: I. Kompatsiaris, J. Cave, A. Satsiou, G. Carle, A. Passani, E. Kontopoulos, S. Diplaris, and D. McMillan (editors). Internet science. Cham, Switzerland: Springer, pp. 377–392.
doi:, accessed 2 November 2020.

B. Brewer, J.Y.H. Leung, and I. Scott, 2013. “Values in perspective: Administrative ethics and the Hong Kong public servant revisited,” Administration & Society, volume 46, number 8, pp. 908–928.
doi:, accessed 2 November 2020.

K. Buhmann, 2016. “Public regulators and CSR: The ‘social licence to operate’ in recent United Nations instruments on business and human rights and the juridification of CSR,” Journal of Business Ethics, volume 136, number 4, pp. 699–714.
doi:, accessed 2 November 2020.

Bus Uncle, 2016. “Bus Uncle is Singapore’s favorite chatbot,” at, accessed 7 January 2020.

J.R. Butcher, D.J. Gilchrist, J. Phillimore, and J. Wanna, 2019. “Attributes of effective collaboration: Insights from five case studies in Australia and New Zealand,” Policy Design and Practice, volume 2, number 1, pp. 75–89.
doi:, accessed 2 November 2020.

G. Cameron, D. Cameron, G. Megaw, R. Bond, M. Mulvenna, S. O’Neill, C. Armour, and M. McTear, 2018. “Best practices for designing chatbots in mental healthcare: A case study on iHelpr,” HCI ’18: Proceedings of the 32nd International BCS Human Computer Interaction Conference, article number 129, pp. 1–5.
doi:, accessed 2 November 2020.

J. Cassell, J. Sullivan, S. Prevost, and E. Churchill (editors), 2000. Embodied conversational agents. Cambridge, Mass.: MIT Press.

A.P. Chaves and M.A. Gerosa, 2019. “How should my chatbot interact? A survey on human-chatbot interaction design,” arXiv:1904.02743 (4 April), at, accessed 2 November 2020.

CX Central, 2016. “How the Australian Tax Office is using a virtual assistant to improve self-service” (9 December), at, accessed 2 November 2020.

G. de Graaf, 2015. “The bright future of value pluralism in public administration,” Administration & Society, volume 47, number 9, pp. 1,094–1,102.
doi:, accessed 2 November 2020.

G. de Graaf, L. Huberts, and R. Smulders, 2014. “Coping with public value conflicts,” Administration & Society, volume 48, number 9, pp. 1,101–1,127.
doi:, accessed 2 November 2020.

A.R. Dennis, A. Kim, M. Rahimi, and S. Ayabakan, 2020. “User reactions to COVID-19 screening chatbots from reputable providers,” Journal of the American Medical Informatics Association (6 July).
doi:, accessed 2 November 2020.

K.C. Desouza, 2018. “Delivering artificial intelligence in government: Challenges and opportunities,” at, accessed 11 November 2019.

K.C. Desouza and R. Krishnamurthy, 2017. “Chatbots move public sector toward artificial intelligence” (2 June), at, accessed 14 January 2019.

Enfield Council, 2016. “Reports on current council issues” (4 October), at, accessed 21 September 2019.

S.L. Epstein, 2015. “Wanted: Collaborative intelligence,” Artificial Intelligence, volume 221, pp. 36–45.
doi:, accessed 2 November 2020.

E. Eriksson, T. Andersson, A. Hellström, C. Gadolin, and S. Lifvergren, 2020. “Collaborative public management: Coordinated value propositions among public service organizations,” Public Management Review, volume 22, number 6, pp. 791–812.
doi:, accessed 2 November 2020.

C. Everett, 2017. “Could AI chatbots be the new face of local gov? Enfield Council thinks so,” Diginomica (23 February), at, accessed 13 May 2019.

S. Fernandes, R. Gawas, P. Alvares, M. Fernandes, D. Kale, and S. Aswale, 2020. “Survey on various conversational systems,” 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE).
doi:, accessed 2 November 2020.

Finnish Immigration Service (Maahanmuuttovirasto Migrationsverket), 2018. “Chatbot,” at, accessed 14 January 2020.

A. Følstad and P.B. Brandtzæg, 2017. “Chatbots and the new world of HCI,” Interactions, volume 24, number 4, pp. 38–42, at, accessed 2 November 2020.

A. Følstad, P.B. Brandtzæg, T. Feltwell, E.L.-C. Law, M. Tscheligi, and E.A. Luger, 2018. “Chatbots for social good,” CHI EA ’18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, paper number SIG06, pp. 1–4.
doi:, accessed 2 November 2020.

A. Froy, 2019. “Why the world needs trustworthy chatbots” (8 February), at, accessed 1 March 2020.

Govtech Singapore, 2019. “Get to know the GovTech team behind Ask Jamie, the government chatbot” (15 May), at, accessed 24 February 2020.

B. Graham, 2018. “Transport for NSW’s interactive Facebook assistant slammed for posting error messages and website links,” (10 July), at, accessed 23 April 2019.

M. Grimsley and A. Meehan, 2007. “e-Government information systems: Evaluation-led design for public value and client trust,” European Journal of Information Systems, volume 16, number 2, pp. 134–148.
doi:, accessed 2 November 2020.

N. Gunningham, R.A. Kagan, and D. Thornton, 2004. “Social license and environmental protection: Why businesses go beyond compliance,” Law & Social Inquiry, volume 29, number 2, pp. 307–341.
doi:, accessed 2 November 2020.

T.M. Harrison, S. Guerrero, G.B. Burke, M. Cook, A. Cresswell, N. Helbig, J. Hrdinova, and T. Pardo, 2012. “Open government and e-government: Democratic challenges from a public value perspective,” Information Polity, volume 17, number 2, pp. 83–97.
doi:, accessed 2 November 2020.

Head to Health, 2018. “Have you tried Sam the Chatbot?” Australian Department of Health, at, accessed 9 December 2019.

J. Hendry, 2018. “NDIS’ great bot hope Nadia takes more time off for stress leave,” IT News (10 December), at, accessed 14 December 2019.

J. Hendry, 2017. “Slow start for Nadia bot as govt pushes trials into next year,” IT News (27 October), at, accessed 12 December 2018.

S. Holmes, A. Moorhead, R. Bond, H. Zheng, V. Coates, and M. Mctear, 2019. “Usability testing of a healthcare chatbot: Can we use conventional methods to assess conversational user interfaces?” ECCE 2019: Proceedings of the 31st European Conference on Cognitive Ergonomics, pp. 207–214.
doi:, accessed 2 November 2020.

M. Jain, P. Kumar, R. Kota, and S.N. Patel, 2018. “Evaluating and informing the design of chatbots,” DIS ’18: Proceedings of the 2018 Designing Interactive Systems Conference, pp. 895–906.
doi:, accessed 2 November 2020.

T.B. Jørgensen and B. Bozeman, 2007. “Public values: An inventory,” Administration & Society, volume 39, number 3, pp. 354–381.
doi:, accessed 2 November 2020.

T.B. Jørgensen and B. Bozeman, 2002. “Public values lost? Comparing cases on contracting out from Denmark and the United States,” Public Management Review, volume 4, number 1, pp. 63–81.
doi:, accessed 2 November 2020.

P.H. Jos and M.E. Tompkins, 2009. “Keeping it public: Defending public service values in a customer service age,” Public Administration Review, volume 69, number 6, pp. 1,077–1,086.
doi:, accessed 2 November 2020.

K. Karunasena and H. Deng, 2009. “A conceptual framework for evaluating the public value of e-government: A case study from Sri Lanka,” ACIS 2009 Proceedings, pp. 1,002–1,012, and at, accessed 2 November 2020.

I. Kearns, 2004. “Public value and e-government,” Institute for Public Policy Research, at, accessed 2 November 2020.

G. Kelly, G. Mulgan, and S. Muers, 2002. “Creating public value: An analytical framework for public service reform,” Strategy Unit, U.K. Cabinet Office, at, accessed 2 November 2020.

K. Kernaghan, 2003. “Integrating values into public service: The values statement as centerpiece,” Public Administration Review, volume 63, number 6, pp. 711–719.
doi:, accessed 2 November 2020.

T. Kowatsch, M. Nißen, C.-H.I. Shih, D. Rüegger, D. Volland, A. Filler, F. Künzler, F. Barata, D. Büchter, B. Brogle, K. Heldt, P. Gindrat, N. Farpour-Lambert, and D. l’Allemand, 2017. “Text-based healthcare chatbots supporting patient and health professional teams: Preliminary results of a randomized controlled trial on childhood obesity,” Persuasive Embodied Agents for Behavior Change (PEACH2017) Workshop, at, accessed 2 November 2020.

D. Kuenzig, 2018. “Government chatbots: Five steps to getting it right,” Govtech Review (5 April), at, accessed 1 March 2020.

S. Laumer, C. Maier, and F.T. Gubler, 2019. “Chatbot acceptance in healthcare: Explaining user adoption of conversational agents for disease diagnosis,” Proceedings of the 27th European Conference on Information Systems (ECIS), at, accessed 2 November 2020.

T.A. Maniou and A. Veglis, 2020. “Employing a chatbot for news dissemination during crisis: Design, implementation and evaluation,” Future Internet, volume 12, number 7, 109.
doi:, accessed 2 November 2020.

R.C. Mayer, J.H. Davis, and F.D. Schoorman, 1995. “An integrative model of organizational trust,” Academy of Management Review, volume 20, number 3, pp. 709–734.
doi:, accessed 2 November 2020.

H. Mehr, 2017. “Artificial intelligence for citizen services and government,” at, accessed 12 September 2019.

T. Meynhardt, 2009. “Public value inside: What is public value creation?” International Journal of Public Administration, volume 32, numbers 3–4, pp. 192–219.
doi:, accessed 2 November 2020.

T. Meynhardt and S. Bartholomes, 2011. “(De)composing public value: In search of basic dimensions and common ground,” International Public Management Journal, volume 14, number 3, pp. 284–308.
doi:, accessed 2 November 2020.

S. Miessner, 2019a. “Meet Kamu: Co-designing a chatbot for immigrants,” Service Gazette (15 August), at, accessed 21 January 2020.

S. Miessner, 2019b. “Starting up smoothly: Experiment evaluation” (27 August), at, accessed 26 February 2020.

J.L.Z. Montenegro, C.A. da Costa, and R. da Rosa Righi, 2019. “Survey of conversational agents in health,” Expert Systems with Applications, volume 129, pp. 56–67.
doi:, accessed 2 November 2020.

M.H. Moore, 1995. Creating public value: Strategic management in government. Cambridge, Mass.: Harvard University Press.

J. Morrison, 2014. “The social license,” In: J. Morrison. The social license: How to keep your organization legitimate. London: Palgrave Macmillan, pp. 12–28.
doi:, accessed 2 November 2020.

MTR, 2018. “Chatbot,” at, accessed 18 June 2019.

T. Nadarzynski, O. Miles, A. Cowie, and D. Ridge, 2019. “Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study,” Digital Health (January).
doi:, accessed 2 November 2020.

L. Ni, C. Lu, N. Liu, and J. Liu, 2017. “MANDY: Towards a smart primary care chatbot application,” In: J. Chen, T. Theeramunkong, T. Supnithi, and X. Tang (editors). Knowledge and systems sciences. Singapore: Springer, pp. 38–52.
doi:, accessed 2 November 2020.

A. Nili, A. Barros, and M. Tate, 2019. “The public sector can teach us a lot about digitizing customer service,” MIT Sloan Management Review, volume 60, number 2, pp. 84–87, and at, accessed 2 November 2020.

C. O’Neil, 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown.

Y. Petriv, R. Erlenheim, V. Tsap, I. Pappel, and D. Draheim, 2020. “Designing effective chatbot solutions for the public sector: A case study from Ukraine,” In: A. Chugunov, I. Khodachek, Y. Misnikov, and D. Trutnev (editors). Electronic governance and open society: Challenges in Eurasia. Cham, Switzerland: Springer, pp. 320–335.
doi:, accessed 2 November 2020.

M. Prebble, 2015. “Public value and limits to collaboration,” International Journal of Public Administration, volume 38, number 7, pp. 473–485.
doi:, accessed 2 November 2020.

L. Price, 2018. “GYANT: AI enabled triage service,” at, accessed 19 February 2020.

A. Probyn, 2017. “NDIS’ virtual assistant Nadia, voiced by Cate Blanchett, stalls after recent census, robo-debt bungles,” ABC News (21 September), at, accessed 12 December 2018.

PwC EU Services, 2019. “Architecture for public service chatbots,” European Commission, Directorate-General for Informatics, at, accessed 5 January 2020.

Queensland Civil and Administrative Tribunal (QCAT), 2019a. “2018–19 annual report,” at, accessed 4 January 2020.

Queensland Civil and Administrative Tribunal (QCAT), 2019b. “How do I get QCAT in Brisbane,” at, accessed 12 December 2019.

Queensland Department of Justice and Attorney General, 2019. “2018–2019 annual report,” at, accessed 2 November 2020.

Queensland Government, 2019. “Resolving tree and fence disputes,” at, accessed 12 December 2019.

Z. Radnor, S.P. Osborne, T. Kinder, and J. Mutton, 2013. “Operationalizing co-production in public services delivery: The contribution of service blueprinting,” Public Management Review, volume 16, number 3, pp. 402–423.
doi:, accessed 2 November 2020.

Y. Redrup, 2016. “ATO deploys Alex a talking ‘Siri for tax’ digital assistant you can talk to,” Financial Review (6 December), at, accessed 18 April 2019.

M. Riikkinen, H. Saarijärvi, P. Sarlin, and I. Lähteenmäki, 2018. “Using artificial intelligence to create value in insurance,” International Journal of Bank Marketing, volume 36, number 6, pp. 1,145–1,168.
doi:, accessed 2 November 2020.

A. Sanchez and L.S. Pacheco, 2016. “Emma: Friendly presence and innovative USCIS resource available 24/7,” (1 September), at, accessed 2 November 2020.

A. Sangroya, P. Saini, and C. Anantaram, 2017. “Chatbot as an intermediary between a customer and the customer care ecosystem,” MEDES ’17: Proceedings of the Ninth International Conference on Management of Digital EcoSystems, pp. 128–133.
doi:, accessed 2 November 2020.

S. Schanke, G. Burtch, and G. Ray, 2020. “Estimating the impact of ‘humanizing’ customer service chatbots,” at, accessed 2 November 2020.

A. Schlesinger, K.P. O’Hara, and A.S. Taylor, 2018. “Let’s talk about race: Identity, chatbots, and AI,” CHI ’18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, paper number 315.
doi:, accessed 2 November 2020.

F.D. Schoorman, R.C. Mayer, and J.H. Davis, 2007. “An integrative model of organizational trust: Past, present, and future,” Academy of Management Review, volume 32, number 2, pp. 344–354.
doi:, accessed 2 November 2020.

J. Seering, M. Luria, G. Kaufman, and J. Hammer, 2019. “Beyond dyadic interactions: Considering chatbots as community members,” CHI ’19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, paper number 450.
doi:, accessed 2 November 2020.

G. Stoker, 2006. “Public value management: A new narrative for networked governance?” American Review of Public Administration, volume 36, number 1, pp. 41–57.
doi:, accessed 2 November 2020.

S. Suganuma, D. Sakamoto, and H. Shimoyama, 2018. “An embodied conversational agent for unguided Internet-based cognitive behavior therapy in preventative mental health: Feasibility and acceptability pilot trial,” JMIR Mental Health, volume 5, number 3, e10454.
doi:, accessed 2 November 2020.

M. Sutton, 2019. “DEWA’s Rammas chatbot wins BIG innovation award,” (13 March), at, accessed 25 February 2020.

M. Sutton, 2018. “DEWA announces new version of Rammas chatbot,” (10 March), at, accessed 3 March 2019.

M. Sweeney and E. Davis, forthcoming. “Alexa, are you listening? An exploration of smart voice assistant use and privacy in libraries,” Information Technology and Libraries, version at, accessed 2 November 2020.

C. Talbot, 2011. “Paradoxes and prospects of ‘public value’,” Public Money & Management, volume 31, number 1, pp. 27–34.
doi:, accessed 2 November 2020.

A.D. Thierer, A. Castillo O’Sullivan, and R. Russell, 2017. “Artificial intelligence and public policy,” Mercatus Research, Mercatus Center at George Mason University, at, accessed 2 November 2020.

J.R. Thompson, 2016. “Public values in context: A longitudinal analysis of the U.S. civil service,” International Journal of Public Administration, volume 39, number 1, pp. 15–25.
doi:, accessed 2 November 2020.

Transport for London (TfL), 2017. “TfL launches new social media ‘TravelBot’” (14 June), at, accessed 4 April 2019.

T.R. Tyler, 2003. “Procedural justice, legitimacy, and the effective rule of law,” Crime and Justice, volume 30, pp. 283–357.
doi:, accessed 2 November 2020.

F.A. Valério, T.G. Guimarães, R.O. Prates, and H. Candello, 2017. “Here’s what I can do: Chatbots’ strategies to convey their features to users,” IHC 2017: Proceedings of the XVI Brazilian Symposium on Human Factors in Computing Systems, article number 28.
doi:, accessed 2 November 2020.

D. Valle-Cruz, E.A. Ruvalcaba-Gomez, R. Sandoval-Almazan, and J. Ignacio Criado, 2019. “A review of artificial intelligence in government and its potential from a public policy perspective,” dg.o 2019: Proceedings of the 20th Annual International Conference on Digital Government Research, pp. 91–99.
doi:, accessed 2 November 2020.

J. van Doorn, M. Mende, S.M. Noble, J. Hulland, A.L. Ostrom, D. Grewal, and J.A. Petersen, 2017. “Domo arigato Mr. Roboto: Emergence of automated social presence in organizational frontlines and customers’ service experiences,” Journal of Service Research, volume 20, number 1, pp. 43–58.
doi:, accessed 2 November 2020.

C. van Noordt and G. Misuraca, 2019. “New wine in old bottles: Chatbots in government,” In: P. Panagiotopoulos, N. Edelmann, O. Glassey, G. Misuraca, P. Parycek, T. Lampoltshammer, and B. Re (editors). Electronic participation. Lecture Notes in Computer Science, volume 11686. Cham, Switzerland: Springer, pp. 49–59.
doi:, accessed 2 November 2020.

M. Venkatesh, 2018. “Artificial intelligence vs. machine learning vs. deep learning,” Data Science Central (7 May), at, accessed 2 November 2020.

J. Walker, 2017. “Azure government chatbot reduces email by 50%” (5 May), at, accessed 24 February 2020.

S. Webb and J.D. Harden, 2017. “Houston ISD settles with union over controversial teacher evaluations,” Houston Chronicle (12 October), at, accessed 15 November 2019.

WienBot, 2017. “The chatbot of the City of Vienna,” at, accessed 24 February 2020.

H.J. Wilson, P. Daugherty, and N. Bianzino, 2017. “The jobs that artificial intelligence will create,” MIT Sloan Management Review, volume 58, number 4, at, accessed 2 November 2020.

A. Winfield, 2019. “Ethical standards in robotics and AI,” Nature Electronics, volume 2, number 2, pp. 46–48.
doi:, accessed 2 November 2020.

B.W. Wirtz, J.C. Weyerer, and C. Geyer, 2019. “Artificial intelligence and the public sector — Applications and challenges,” International Journal of Public Administration, volume 42, number 7, pp. 596–615.
doi:, accessed 2 November 2020.

E. Witesman and L. Walters, 2014. “Public service values: A new approach to the study of motivation in the public sphere,” Public Administration, volume 92, number 2, pp. 375–405.
doi:, accessed 2 November 2020.

D. Zumstein and S. Hundertmark, 2017. “Chatbots — An interactive technology for personalized communication, transactions and services,” IADIS International Journal on WWW/Internet, volume 15, number 1, pp. 96–109.




List of the chatbots reviewed.
Chatbot name | Deploying public agency | Service offered | References
MTR Mobile chatbot | Mass Transit Railway, Hong Kong | Assists users to plan their travel with public transport. | MTR, 2018
Alex | Australian Tax Office, Australia | Assists users to conveniently access information on issues related to taxation, property rights, income and deductions, and filing returns. | CX Central, 2016
EMMA | U.S. Citizenship and Immigration Services | Assists visitors in finding relevant information pertaining to U.S. immigration services. | Sanchez and Pacheco, 2016
TravelBot | Transport for London, U.K. | Provides users with information such as bus arrivals, route status, bus and train service updates, and maps. | TfL, 2017
Nadia | Department of Human Services, Australia | Designed for the public welfare service system to interact with participants of the National Disability Insurance Scheme. | Probyn, 2017
Real-time Intelligent Transport Assistant (RITA) | Transport for New South Wales, Australia | Provides users with information about public transport system delays and disruptions in response to the user’s query. | Graham, 2018
Kamu | Finnish Immigration Service, Finland | Assists users with information related to residence permits to live in Finland. | Finnish Immigration Service, 2018
Amelia | Enfield Council, U.K. | Provides users with timely responses to general requests made to the council, freeing up council employees to focus on more complex citizens’ issues. | Enfield Council, 2016
MANDI | Queensland Government, Australia | Directs users to information that enables them to resolve a wide range of neighbourhood disputes in a safe and fair manner. | Queensland Government, 2019
SANDI | Queensland Civil and Administrative Tribunal, Australia | Directs users to information that assists them with resolving tree and fence disputes at any time of the day. | QCAT, 2019a
Ada Health GmbH | Berlin, Germany | Provides medical diagnostic suggestions to patients based on the information provided by the users. | Laumer, et al., 2019
Rammas | Dubai Electricity and Water Authority | Handles user enquiries and provides responses using data stored within the public utilities system. | Sutton, 2018
Babylon GP at hand | National Health Service, U.K. | Provides users with medical advice based on a diagnosis of the users’ responses and pairs users with expert medical professionals when necessary. | Blanchard, 2019
Patrek | Finnish Immigration Service, Finland | Advises users on how to set up companies in Finland. | Finnish Immigration Service, 2018
Verobot | Finnish Immigration Service, Finland | Advises users about business- and work-related taxes in Finland. | Finnish Immigration Service, 2018



Editorial history

Received 26 June 2020; revised 18 August 2020; accepted 19 August 2020.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Chatbot-mediated public service delivery: A public service value-based framework
by Tendai Makasi, Alireza Nili, Kevin C. Desouza, and Mary Tate.
First Monday, Volume 25, Number 12 - 7 December 2020