Trust, risk and eID: Exploring public perceptions of digital identity systems
First Monday

by Ruth Halperin and James Backhouse



Abstract
This paper offers an account of the perceptions of citizens from the U.K. and Germany on the subject of interoperable electronic identity (eID) systems. It draws on a grounded empirical analysis identifying the risks that citizens associate with online identity management systems: information risk, economic risk and socio–political risk. Our study suggests that the perceived risks derive from, and are amplified by, low trust beliefs in the public authorities responsible for identity management. Three dimensions of trustworthiness in government were found — competence, integrity and benevolence — constructed from negative past experiences of IT failures, function creep, and political histories of oppression. To theorise our findings we propose a model depicting the relationships between trust, risk and behavioural intentions. Practical implications deriving from the study concern trust enhancement and risk reduction strategies aimed at winning public acceptance of eID systems in electronic services.

Contents

Introduction
Methodology
Findings
Discussion
Conclusion and future research

 


 

Introduction

As the move towards eGovernment gathers pace, research must examine what digital citizenship consists of, what its risks are, and how they might be managed in the new digital era. An important characteristic of eGovernment lies in its dependence on technologies for managing identity (Aichholzer and Strauß, 2010). Web–based identity management systems (IdMS) are now essential for eGovernment applications involving large–scale sharing of personal data and requiring the identification and authentication of individual citizens (Lips, 2010; Saxby, 2006).

In 2005 the eEurope Action Plan called on the European Commission to issue an agreed interoperability framework [1] to support the delivery of pan–European eGovernment services to citizens and enterprises (IDABC, 2005; Schnittger, 2005). Envisaging e–services on a larger scale than before, this plan aimed to harmonise European tax, social security systems, educational systems, jurisdiction for divorce and family law, driving rules and benefit and welfare regimes (Kinder, 2003; Threlfall, 2003). The i2010 Strategic Plan highlighted interoperability as one of the four main challenges for the creation of a single European information space and as essential for ICT–enabled public services. As part of this plan, the Interoperability Solutions for Public Administrations (ISA) programme [2] proposed a pragmatic approach to identity management [3]. The EU Digital Agenda 2020 defined interoperability as one of its key initiatives [4].

Although the interoperability agenda for Europe holds considerable power to transform and improve eGovernment systems, a number of challenges emerge. First, there are technical challenges relating to data homogeneity and system interoperability for proper and efficient metadata exchange (Prokopiadou, et al., 2004; Recordon and Reed, 2006). Second, there are challenges within the policy realm, such as the creation, communication and diffusion of commonly accepted standards [5] (Chen and Doumeingts, 2003; Otjacques, et al., 2007). Third, there are challenges concerning politics, culture and behaviour (Scholl, 2005). A crucial aspect of the latter involves citizens and their perceptions of interoperable IdMS for public service. As part of the planning for such systems, policy–makers will have to consider the perceptions of the citizen (Backhouse and Halperin, 2009; Ray, et al., 2011). The notion of citizen–centricity draws attention to this aspect of the new systems, one that may vitally affect whether or not they eventually win public acceptance.

The citizen–centric approach to reform forms an important part of eGovernment (Lips, 2007; Reddick, 2010): it gives citizens an improved and central position in their service relationship with government, takes their needs into account, and aligns the new digital public service systems more closely with their perspectives and aspirations. Jaeger and Bertot (2010) assert that citizen–centric eGovernment contrasts directly with eGovernment seen merely as a means of reducing costs and of providing the same service on a different platform. A service that does not meet the needs of the citizen is a service that will not be used widely and will waste taxpayers’ money. Winning public acceptance is central to the citizen–centric approach and essential in ensuring adoption and successful implementation of the new systems.

In Europe, citizen–centricity represents a key policy aim and one of three themes in the U.K. Transformational Government Strategy (2006) [6]. The European Commission continues to emphasise its importance and has added it as a new indicator to the eGovernment benchmarking process [7]. Despite this, a European Union report advises that in respect of inclusive and citizen–centric eGovernment ‘much effort is still highly fragmented in terms of both policy and practice ...’ [8] Verdegem and Hauttekeete (2010) argue that the low uptake of eGovernment by citizens has been caused by an overly government–centric and technologically deterministic way of working, which has resulted in the creation of e–services that neglect the user.

Research is needed, therefore, on the way in which eGovernment connects with citizens, given that they constitute the most significant user group for its information and services (Jaeger and Bertot, 2010). The LSE Identity Project (2010) relates citizen–centricity to public trust. Critically analysing the eID plans in the U.K., it contends that future identity systems should be based on public trust and user demand rather than on enforcement by criminal and civil sanctions. Public trust could benefit from the creation of more flexible citizen–centred systems [9]. This study seeks to contribute to the development of the citizen–centric approach by providing insight and understanding regarding the perspectives of citizens towards interoperable identity management — a strategic information requirement that underpins all eGovernment. To date, however, perceptions of eID services have received little research attention (Lusoli and Miltgen, 2009; Eurobarometer 359, 2011 [10]). A better understanding of the perceptions of citizens should open the way for better design and implementation, especially given that these systems are still in the early stages of their development (Kubicek and Noack, 2010).

 

++++++++++

Methodology

The methodology adopted in this study draws from grounded theory (GT) (Glaser and Strauss, 1967; Martin and Turner, 1986) and seeks to develop accounts that are grounded in data. This exploratory approach seemed particularly appropriate given that little is known about identity management systems or specifically eID in the non–technical literature (Seltsikas and O’Keefe, 2010). We adopted the analytical technique of open coding (Strauss and Corbin, 1990). The inductive and contextual characteristics of GT fit with the interpretive orientation of this research. The focus was on developing a context–based description, with the aim of generating an account of citizens’ perceptions regarding public sector identity management systems.

Data collection

We collected the data as part of a Web survey exploring the perspectives of European citizens towards the new interoperable systems planned by the European Union. The stated research aim was ‘... to assess citizens’ attitudes towards the idea of a future electronic ID card to be used across all European countries’, and a brief description was provided of the proposed EU scheme and of the introduction of a Europe–wide eID card which would allow shared use for all governmental services across all European countries.

Designed to generate qualitative data, the survey contained an open–ended invitation to respondents to freely comment on any aspect of interoperable eIDs, generating free text data. Statements varied both in terms of length and level of elaboration, ranging from a few words or a phrase, e.g., ‘1984’ to a few hundred words. On average, responses contained about 60 words, typically structured in two or three sentences.

Citizens were also asked to respond to demographic questions based on those asked in the Eurobarometer survey [11]. Respondents had a mean age of 34 years, with a minimum age of 15 and a maximum of 77. In terms of gender, male respondents were heavily over–represented in the survey (female 17 percent; male 83 percent).

Finally, on the question of political views, results indicated a mean of 4.53 on a 10–point scale, with 1 indicating farthest left and 10 farthest right.

We requested that only EU citizens respond to the survey, which was promoted by all 24 participating organisations of the FIDIS consortium [12], within which this research was carried out. Although responses came from several EU countries, the data used in the present study were restricted to just two countries, Germany (N=360) and the U.K. (N=377). While the volume of responses from these two countries lent itself to rigorous content analysis, the number of responses from other countries was significantly smaller, and these were excluded from the analysis so as to avoid national bias.

Although the dataset provided a wealth of material about citizen perceptions, we recognise the limitation of Web surveys as self–selected samples. We could have opted for in–depth interviews but this method has its own drawbacks. Interviews would not have provided the diverse and anonymous responses obtained through the Web survey. Another issue with the Web survey is the potential bias that those with strong opinions on the topic were those most eager to respond. A further limitation involves a gender bias, as indicated above, whereby male respondents are heavily over–represented in the survey. However, the validity of qualitative research stems not from its statistical basis but from the plausibility and the cogency of the logical reasoning used in describing the results from the cases, and in drawing conclusions from them (Ågerfalk, 2004). The study was not designed to provide statistical generalisability (Yin, 1984), but it may properly lay claim to analytical generalisability, in particular to the common type of generalising from data to descriptions (Lee and Baskerville, 2003). Our aim was to develop understanding of the reasoning behind the many strongly–held views that have circulated on the issue of eID.

Data analysis

The first stage in preparing the data for the study involved translating all data into English so as to allow a standardised analysis. Making sense of the large body of unstructured text required the creation of a grounded analysis framework. This technique uses a form of content analysis in which the data are read and categorised into concepts suggested by the data themselves rather than imposed from outside (Agar, 1980). The unit of analysis was the unit of meaning (Henri, 1992): each response was partitioned into text segments representing one or more ideas in the text. This method of open coding (Strauss and Corbin, 1990) relies on an analytical technique of identifying possible categories, their properties and dimensions (Kelle, 2007). As we examined the data, we organised the emerging concepts according to recurring themes. These themes became prime candidates for a set of stable and common categories, each linking a number of associated concepts. This qualitative method of data analysis is known as axial coding (Strauss and Corbin, 1990) and relies on a synthetic technique of making connections between sub–categories to construct a more comprehensive scheme. The goal is to determine a set of categories and concepts that covers as much of the data as possible (Orlikowski, 1993).

The findings presented in the following section, and the resulting empirically grounded model, required a lengthy iterative analysis process that broadly comprised two major phases. The first phase involved categorising the different perceptions expressed in the data under three broad judgment types towards interoperable eID: positive, negative or ambivalent. The sub–categories that emerged for each judgment type captured the substance and nuances associated with prevailing perceptions. As we completed the coding, however, it became clear that the vast majority of the responses consisted of negative judgments (80 percent). The remaining responses were categorised as either ambivalent (13 percent) or positive (seven percent). In the second phase, we focused on these critical perceptions, not just because they were widely held by respondents but also because they signalled potential resistance. We carried out this second phase by “scaling up” (Urquhart, et al., 2010), identifying three overarching and sequential themes: the concerns citizens associated with IdMS, the reasoning underlying these concerns, and the declared intentions to act upon them.
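The arithmetic of the first coding phase can be sketched as a minimal tally. The labels below are hypothetical stand–ins for the actual coded responses — the real coding was a manual, interpretive process — and the counts are chosen only to reproduce the reported 80/13/7 percent split.

```python
from collections import Counter

# Hypothetical phase-one judgment codes; in the study each free-text
# response was read and assigned one of three broad judgment types.
coded_judgments = ["negative"] * 80 + ["ambivalent"] * 13 + ["positive"] * 7

counts = Counter(coded_judgments)
total = len(coded_judgments)

# Share of each judgment type across all coded responses
shares = {judgment: n / total for judgment, n in counts.items()}
for judgment, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{judgment}: {share:.0%}")
```

In the study itself these proportions were of course derived from the manual coding of the 737 German and U.K. responses, not from a script; the sketch only makes the reported proportions concrete.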

Further content analysis suggested that the concerns expressed (e.g., ... it will be abused ... . ...There’ll be identity theft ...it will be hacked) could be understood and conceptualised in terms of risk perceptions as the responses speak to the potential harm IdMS might bring about. But unlike dangers to which people are exposed but cannot control (Luhmann, 1993), the risks associated with IdMS were perceived as a consequence of decision and action. Analysing the reasoning underlying perceived risks revealed trust as the predominant issue: ... Governments cannot be trusted to maintain identity information on the citizen’s behalf ... The main problem is not the data that is stored on the ID–card, but the lack of trust in the authorities that handle the data ... . The abundance of similar statements and the way respondents justified them led us to focus on trust–related beliefs in subsequent analysis.

As often found in adopting the approach of GT, our data analysis and theorising involved both induction and abduction (Reichertz, 2007). Abduction involves imaginative interpretation while at the same time forcing the researcher repeatedly to seek accountability from the empirical data (Bryant and Charmaz, 2007). Specifically in this study, induction was predominant in the initial phases of the data analysis process, and when moving from data to descriptions (Akkermans and Vennix, 1997). Abduction took on more importance in the subsequent iterations and in theorising, which involved reconceptualising descriptive categories and abstraction, and in particular the development of a grounded model depicting relationships among the core categories.

This analysis demonstrates the role of theoretical sensitivity in GT (Glaser, 1978). Glaser and Strauss (1967) emphasise the need to avoid preconceptions or the forcing of existing concepts or theory, so allowing concepts to emerge from the data. Strauss and Corbin (1990) explicitly acknowledge that the discovery of theoretical categories during the coding process needs to draw on “existing stocks of knowledge” [13]. Butler and O’Reilly stress the notion of pre–understanding in GT research, reminding us that pre–understanding will always be employed, and go on to suggest that ‘... researchers should not ... enter the field without a solid grounding in the literature or a priori theory as pre–understanding’ [14].

The two notions of risk and trust were used in the analysis as sensitising concepts. From the risk literature in particular we adopt the crucial distinction between risk as a functional relationship between probabilities and adverse effects, and risk perception as the processing of information about potentially harmful events or activities and the formation of judgments about the seriousness, likelihood and acceptability of the respective event or activity (Renn, 2008; Lupton, 1999). Our analysis focused on the intuitive, subjective judgment of potential harm, and on the willingness of the public to accept the risk, or otherwise manage it. We also adopted common security principles (Dhillon, 2007) to assist in the disentanglement of grounded risk categories.
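The distinction drawn here can be stated compactly. A common formalisation in the risk literature — offered only as an illustrative sketch, not as part of the survey analysis — treats risk as a function of the probability of an adverse event and the magnitude of its consequences:

```latex
% Objectivist notion: risk as a function of probability and adverse effect,
% often operationalised as an expected harm over possible events i
R = f(p, c) \approx \sum_i p_i \, c_i
```

Risk perception, by contrast, concerns the subjective estimation of these probabilities and consequences and the judgment of whether the resulting risk is acceptable — the focus of the present analysis.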

Theoretical sensitivity and theoretical integration (Urquhart, et al., 2010) also figure strongly in the analysis of trust beliefs. While the focus on trust issues is entirely data driven, we draw on the literature to reconceptualise grounded categories. As we explain in more detail below, we employed constructs borrowed from existing trust models and typologies (Mayer, et al., 1995; McKnight, et al., 2002). A fundamental concept we draw on is that of institutional trust (Zucker, 1986; Kramer, 1999; Turow and Hennessy, 2007), in contrast to dyadic interpersonal trust (Hudson, 2006; Pavlou and Gefen, 2004). Our study analyses the trust of citizens in their government — in public authorities or institutions responsible for identity management, revealing the conditions within which trusting beliefs are constructed.

 

++++++++++

Findings

The risk perceptions of citizens were mapped into three discrete types of risk: information, economic and socio–political. We focus on the underlying perceived risks in which trust issues played a key role and on the trustworthiness beliefs, and analyse justification patterns in the assessments by citizens. We illustrate the declared intentions to act upon perceptions. Drawing the findings together in the concluding part of this section, we offer a grounded model that associates trust beliefs, risk perceptions and behavioural intentions around interoperable identity management systems in eGovernment.

Perceived risks of interoperable IdMS

Concerns emerged about the risks associated with the new EU–wide eID systems. Our analysis and interpretation of the many statements of concern led us to identify discrete areas of risk onto which we mapped and coded the observed perceptions. These are described and illustrated in the following sections, using the data that generated each category.

Information risk

Information risk represents a broad theme in the analysis of risk perceptions — those risks related directly to information, information handling, and the IT implicit in identity management. To analyse further the perceptions associated with this theme, and to assist sub–categorisation in subsequent iterations, we drew on the three core principles of information security [15] as a simple and well–known taxonomy of such risks. We illustrate the results below.

Data integrity. Data integrity refers to the match between the record stored in a system and the real characteristics of the data subject. Here we coded statements expressing concern regarding the accuracy of personal data held in eID systems, e.g., ... If there are wrong entries one is economically dead ... (U.K.); or

... data sometimes become obsolete and in some cases I don’t have the possibility to update it — because I don’t know, where all over the world data about me is available (Germany).

Hence, if data is used to determine eligibility for services, then inaccurate data connotes a risk of benefit loss, including financial loss. When inaccurate data is used across a number of inter–related systems — as might be expected in a context of interoperability — the error rate will increase, as will the risks resulting from any single error.

Availability. We refer to the availability of the technological systems used to store and process information, the controls that protect it, and the functioning of the communication channels used to access it. Under availability we coded statements referring to the suitability and reliability of IdMS–related technologies such as RFID (mainly in Germany): ... not if there is an RFID–Chip on it ... This IS going to be abused, if not by the authorities themselves, then by criminals within these organisations (original emphasis); and biometrics (mainly in the U.K.): if biometric data is stored governmental authorities will pay more attention to these attributes and build up prejudices based upon this. From availability–related statements emerge risk perceptions concerning criminal abuse, discrimination and prejudice.

Confidentiality. Confidentiality refers to issues of access to personal information — its limits and its authorisation: ... Due to the large number of people that have a right to access, the danger of abuse of data is too big (Germany)

... If my data finally are exchanged without my knowledge I am not able to reconstruct who possesses what data about me at all. The problem consists of the fact that I possibly do not want, that A knows the data that I passed to B (Germany).
... Through the implementation of an electronic identity card and the exchange of data between enterprises and authorities as well as between authorities and authorities privacy vanishes completely (U.K.).

Loss of privacy emerges as the main risk associated with confidentiality, but also included were second order risks, such as abuse of data by unauthorised parties.

Beyond information risk, two further distinct areas of risk perception were identified, which we refer to as economic risk and socio–political risk.

Perceived economic risk

In contrast to a technical, objectivist notion of economic risk, the term is used here to represent citizens’ intuitive cost–benefit assessments:

... already the electronic passport hasn’t been a good idea but an electronic identity card — for which the citizen probably will have to pay a high price again!!! — is absolutely wrong! (Germany).

Our data showed that citizens felt that the new IdMS might put public money at risk if the system eventually fails: ... I feel the authorities will fail to deliver a secure, working system. It will be a monumental waste (U.K.); or leads to overbearing bureaucracy: ... I am concerned about the risk from inflated red tape (Germany).

Socio–political risk

The third area of perceived risk to emerge from the grounded analysis concerns undesirable socio–political implications. The main issue raised is that interoperable IdMS might induce a detrimental shift in the balance of power between government and citizens:

While I have no great problems with the idea of a proof of identity (e.g. Passport), the eID schemes proposed will shift the balance of control of identity from the Citizen to the State (U.K.).

The use of IdMS risks strengthening the hand of government whilst weakening that of citizens, as it creates nearly unlimited opportunities for surveillance and governmental control (U.K.). ... The storage of personal data is an instrument for power of the Community (EC) against its citizens ... . (Germany)

In respect of the reasoning underlying the perceived risks, our data clearly direct attention to the issue of trust, more specifically, to the role of low trust, even mistrust, in generating and amplifying risk perceptions. The following section presents findings on the trust–related perceptions held by citizens.

Trusting beliefs

... I believe the authorities will attempt to be honest and secure but ultimately will be unsuccessful in maintaining the confidentiality of my data (U.K.)

This quote from a U.K. respondent is chosen in order to illustrate a fundamental distinction concerning issues of trust in public authorities. Its focus on honesty in the first part of the statement implies an assessment of the integrity of the authorities. The second part of the statement addresses the inability to deliver a proper system, so questioning their competence. These twin themes of competence and integrity emerged from the analysis as distinctive and independent categories. In the response quoted above, the integrity of public authorities is assessed positively while the judgment for competence is negative.

Many responses articulated negative perceptions of trust in both the competence and integrity of public authorities:

Governments cannot be trusted to maintain identity information on the citizen’s behalf, and once such information is under the control of governments, its abuse will necessarily follow – either by government itself, or by criminals who infiltrate government systems (Germany)

In this example the suggestion is that the potential for abuse of personal data under public custodianship is inevitable — ‘either by government itself’, implying a lack of integrity on the part of the government, ‘or by criminals who infiltrate government systems’ — suggesting the inability of the government to secure the data from criminal attack.

The distinction between competence and integrity was therefore apparent in the grounded analysis. Similar dimensions of trust beliefs are found in the literature. In their integrated model of organisational trust, Mayer, et al. (1995) propose the notion of ability, i.e., the ability of the trustee to do what the truster needs. Renn suggests that perceived competence is a component of institutional trust, defined as the degree of technical expertise in meeting an institutional mandate [16]. Clustering types of trusting beliefs, McKnight, et al. (2002) identify constructs related to competence, which include expertness, dynamism, capability and good judgment. Studying trust relationships in the context of B2C eCommerce, the authors configure competence as an attribute of trustworthiness. The notion of integrity (Mayer, et al., 1995), as well as similar concepts such as fairness, truthfulness, honesty and sincerity, is found in previous research on trust–related perceptions (cf., Wang and Vassileva, 2003). The typology offered by McKnight, et al. (2002) classifies a set of integrity–related constructs including credibility, morality, and dependability. This study confirms the importance of these two constructs in perceptions of trust. We now present findings relating to the themes of competence and integrity.

Trusting the competence of public institutions

... I am not against ID cards in principle, but have grave doubts about the competence of those running the system. Human error is probably a bigger risk than IT (U.K.)

The theme of competence arose repeatedly in responses that questioned the ability of the state to secure and manage personal data. Negative judgments focused on the technical proficiency of the public authorities. References to past experiences substantiated the perception of public authority incompetence:

... We also already have all the evidence we need to know that massive governmental IT projects are massive disasters, since every single one in the past twenty years has been (U.K.)
Unfortunately the authorities have shown in the past their incompetence in realizing IT–projects (Germany)

Our findings point to negative perceptions regarding the ability of governments to operate a secure, interoperable IdMS. The responses confirm a low level of trust in public authorities, and a concern that they will ultimately ‘fail to deliver a working system’. The perceived inability of the government to manage large–scale IT projects arises from a reputation for failure.

Perceived integrity of public institutions

In our initial coding scheme integrity encompassed all statements referring to morality, implying bad or wrong behaviour. Going beyond the general judgments of integrity however, a closer analysis revealed that the main issue of integrity revolved around questions of the truthfulness of state institutions in respect of their handling of personal data. A key concern lay in the suspicion of possible opportunistic behaviour on the part of public institutions:

States cannot be trusted to restrict their use of citizenship data to what they promised in different circumstances (U.K.)
I don’t have the confidence that authorities can resist the temptation to use all available information to solve their acute problems e.g. ‘terrorism’ or crime (Germany)

The responses voice concern for mission (or function) creep, in which information collected for one purpose is used for other purposes that data subjects have not approved of. Some respondents dreaded certain ‘third parties’ gaining access to identity information:

I am afraid that personal biometric data are combined with different databases and will be used for other purposes than the one originally determined. These “other” uses are for example — criminal prosecution, marketing, health insurance (original punctuation) (Germany)

This respondent proposes a number of potential “other uses” of personal data, including law enforcement — personal data shared between different public sector authorities, and marketing and insurance — personal data passed from government to private sector organisations.

The pattern of relying on past events as a way of justifying lack of trust was observed once again in the context of integrity perceptions. Just as they had done in respect of competence, respondents rationalised their negative judgment of government integrity in terms of ‘real life’ examples of mission creep cases:

My basic fear is that collected data will be used afterwards by politicians/authorities/enterprises etc. for other purposes than the originally declared ones. See the deployment of the toll system for — alleged — law enforcement purposes (Germany)

A case frequently cited in German responses was the proposed use of German Autobahn toll–collection data for law enforcement purposes. Although the data had not been used in this manner to date [17], the mere proposal to share it with law enforcement agencies suggested to citizens that mission creep of this kind is, at the very least, always an option.

Another often cited case was the transfer of EU citizens’ passenger data to the U.S. government:

EU authorities, inter alia, with the transmission of flight passenger data to the U.S. already shown clearly that data protection doesn’t play an important role. Why should I now rely on the same institutions? (U.K.)

Respondents interpreted the transfer of passenger data as a violation of privacy and as evidence of mission creep and one had a clear opinion about how to deal with it: Data sharing between countries should be an “opt in” scheme, not “opt out”, and people should be informed fully of the implications of either choice (original punctuation).

Challenging underlying motives: Good or ill will?

It’s not about easy access for citizens to authorities. The reason for ID systems is to establish surveillance measures. And this is communicated to me as an advantage for the citizen?? (original punctuation) (Germany)

This response encapsulates the third grounded theme: the motives behind government eID initiatives. Comments such as electronic ID–cards are targeted specifically to record and analyze individuals and all their actions, or the recurring use of the metaphors ‘1984’ (mostly by British respondents) and ‘Glass Citizens’ (mostly by German respondents), were all taken to represent a negative perception of intrusive government surveillance. Attempts to conceptualise this emerging category led us initially to use descriptors such as a priori anti–ID. On further analysis we favoured terms such as overpowering state to refer to this emerging theme, as our preliminary descriptors were less useful for interpreting responses within a trust relationship context. This understanding came later, as we explored the congruence between this data–driven theme and the theoretical concept of benevolence (Mayer, et al., 1995; Schoorman, et al., 2007). Benevolence refers to the expectation of both goodwill and benign intent from a trusted party (Yamagishi and Yamagishi, 1994). It is a construct found in a number of trust models, often seen as a trustworthiness attribute, or a type of trust belief (McKnight, et al., 2002). The concept of benevolence is akin to that of integrity, in that both reflect ethical traits. Mayer, et al. (1995) suggest that benevolence refers to trustee motives and is based on altruism, whereas integrity refers to keeping commitments and not lying — traits that may be manifested for utilitarian rather than altruistic reasons. Drawing on this delineation, we found it constructive to reconceptualise the grounded theme associated with perceptions of government motives using the notion of benevolence. In this way, we were able to generalise and reinterpret our findings in terms of trust perceptions.

As in the previous categories of trust perceptions, respondents constantly pointed to past events to justify their beliefs:

... Such systems/approaches have been implemented in Germany between 1933 and 1945 with deadly outcome for some of the citizens (Germany)
... ID systems are for Nazi Germany and Soviet Russia and other Police States and Dictatorships. They are completely incompatible with a free, democratic society (U.K.)

Evident in these quotes is the strong historical grounding of another risk: since governments have abused their power over personal data in the past, they are seen as capable of doing so again in the future.

In terms of the perceptions of the public sector institutions responsible for IdMS, three discrete types emerged from the analysis. The competence theme pointed to a perceived inability to securely operate a system that stores sensitive personal data. The integrity theme covered the use of personal information for purposes not originally declared and the passing of personal information to third parties without approval. The benevolence theme focused on the question of personal data as a means of overly intrusive surveillance.

A pattern discovered in the findings lay in how respondents used their knowledge and interpretation of past events to justify and substantiate their perceptions. In the case of competence, a record of IT failures in the public sector was repeatedly mentioned. Lack of trust in the integrity of the government was explained by reference to numerous cases of function creep. Finally, abuse of personal information by certain regimes in the past suggested the possibility of similar behaviour in the future, should circumstances change.

Intentions to act

Many respondents provided examples of declared intentions to act upon their negative perceptions. Examples of these include:

I will trick them and tell them lies, whenever this is unavoidable ... (Germany)

I will resist it at every turn (U.K.)

I will refuse with all means to give more than my contact data to be stored on electronic IDs! (original punctuation) (Germany)

People I talk to are going to sabotage or resist this. (U.K.)

I will refuse to release any biometric data. I will not have a passport with biometric data issued (Germany)

I will never, never, NEVER have my fingerprints, retina scans or DNA profile taken from me and put on a Government Database. I would rather die than have that happen. (original capital letters) (U.K.)

... The current moves by governments to manage ID amount to attempted theft of my identity. I will fight strongly to protect this. (U.K.)

These declared intentions are decidedly negative and perhaps contain some bravado: they should in any case be interpreted with care, as the transition from intention to action is often uncertain. Even so, the low–risk option of providing false information to the growing number of public data repositories would be simple and effective for the aggrieved citizen.

So far we have presented the core categories that emerged from the data analysis process, encompassing eID–related risk perceptions, trusting beliefs and intentions to act. Before considering the links among these grounded themes, we draw attention to the similarity between findings in the U.K. and Germany. In the early stages of the analysis, as we generated initial descriptive categories, the use of certain metaphors seemed to characterise the respective countries. As we added further data to reach saturation, however, we found that the ‘Big Brother’ metaphor in its different guises was not used exclusively by U.K. citizens. Likewise, reference to RFID was more frequent in German comments, while U.K. respondents fretted over biometrics. A likely explanation links back to the national context surrounding eID in these countries and the associated public debates. Nonetheless, the issues raised by both German and U.K. citizens were similar, essentially questioning the reliability and suitability of ID–related IT, and were therefore coded under one sub–category of information risk.

As the analysis scaled up through the process of grouping higher–level categories into broader themes (Urquhart, et al., 2010), the national differences between the U.K. and Germany appeared even less significant, such that the refined analytical framework consistently represented both countries. Our analysis concludes next with an attempt to offer an integrative account, bringing together findings regarding risk perceptions, trusting beliefs and intentions to act. In so doing, we move beyond coding to exploring the relationships between core categories, which then form a ‘grounded theory’ (Urquhart, et al., 2010).

Towards a trust–risk model

Miles and Huberman (1984) describe a causal model or network as “a visual rendering of the most important independent and dependent variables in a field–study and of the relationships between them” [18]. The network might be causal, or less strongly, one of influences and associations. As well as using abductive logic (see above in Data Analysis section) the network or model can be developed with a deductive research approach based upon existing theory looking for data to either confirm or refute a preliminary model. Or else an inductive research approach can be adopted whereby a causal network or model is constructed ‘ab initio’ from the mentions made of causal links in the case data. However “by the end of the data gathering both species of researcher are about at the same place ... induction and deduction are dialectical rather than mutually exclusive research procedures” [19]. The model that emerges can then afterwards be confronted with existing theory (Urquhart, et al., 2010).

In pulling together the various threads, our grounded analysis suggested a preliminary model depicting the role of past events as interpreted by citizens (referred to as background views) in forming and justifying trusting beliefs, which generate in turn a set of risk perceptions. Perceived risks give rise to certain behavioural intentions, which subsequently indicate risk–related behaviour. The proposed model is illustrated in Figure 1 and further explained by drawing on the grounded analysis from which it emerged. The arrows indicate links or associations between concepts that are supported by the data analysis. In the Discussion section we confront our model with existing theory on trust and risk in similar contexts.

 

A grounded trust-risk model for public sector IdMS
Figure 1: A grounded trust–risk model for public sector IdMS.

 

The first step of early conceptualisation was to try to see the data in terms of a conceptual framework. The observations were grouped roughly into related themes, which gradually gained form and sharper identity as they persisted through successive cycles of our analysis and took on a more solid outline. What emerged from our analysis was a sequence whereby what we have termed the background views influenced the trust beliefs that our respondents held vis–à–vis the public authorities, and such beliefs in turn influenced the risk perceptions and then the intentions to act. In terms of influence, the model asserts that the adverse events experienced by citizens in the background view theme were ultimately associated with the declared recalcitrance of what we have termed the risk–related intentions, although this remains to be further researched before it can be deemed stable. Within this structure can be discerned something similar to what Miles and Huberman refer to as different “causality” streams; one might be, for instance, the logical thread of influences linking IT failures, belief in government incompetence, information risk, and potentially withholding personal information [20].

Background views and low trust. ‘Background views’ was a theme that took shape from specific past events constantly referred to in our data, ranging from ‘IT failures’ by governments, to ‘function creep’, where government authorities use personal information for purposes not previously approved by citizens, to what we have called ‘political history’ in respect of citizens’ rights. Such views link to underlying trust beliefs: specifically, the IT failures were associated with a negative assessment of competence, function creep cases influenced the questioning of the integrity of government authorities and their handling of personal and sensitive data, while the political history of totalitarianism was linked with doubts over the government’s goodwill and commitment to act in the best interests of the public, especially over the foreseeable future.

Low trust and high–risk perceptions. Low trust in the public authorities influenced specific risk perceptions relating to information security, economic, and socio–political implications. Perceiving the government as incompetent affects both the data integrity and availability aspects of information risk perceptions because both require technical capabilities, technological innovativeness and information management abilities seen to be missing in the governments of the U.K. and Germany. The perception of incompetence also influences the economic risk perception specifically in the risk to public funds, with respondents bemoaning the waste of public assets. Lack of integrity, by contrast, is associated with information risk, in particular with questions about the confidentiality related to unauthorised access, such as in function creep cases. Absence of benevolence was linked to a perceived socio–political risk where the use of interoperable IdMS could trigger a detrimental (to citizens) shift in the balance of power between state and citizen.

High–risk perceptions, risk–related intentions and behaviour. Risk perceptions were seen to influence citizens’ intentions to act, what we have termed risk–related intentions, such as withholding or falsifying personal information. In other words, such intentions could lead to instrumental action such as resistance in the form of non–use or outright misuse. More than simple recalcitrance, such intentions could perhaps be interpreted as risk–reducing actions that offer citizens the chance of protecting their privacy by failing to supply correct personal information or to keep it up–to–date.

 

++++++++++

Discussion

In this section we critically discuss the results of our research and consider the theoretical contribution of this work by comparing our grounded account with previous trust–risk models. Urquhart, et al. (2010) see theoretical integration as attempting to relate the theory that has emerged to “other theories in the same or similar field”. We also reflect on the practical implications arising from this study in terms of fostering a citizen–centered development of eID.

Impacting trust and risk

The model that emerges from our study is rather different from other trust–risk models proposed in the literature (cf. Gefen, et al., 2003). This grounded model threads a connection from background views, constructed from past experience of IT failures, from function creep cases, and from political history, to trust beliefs. These beliefs are associated with risk perceptions, which form the background of stated intentions and might trigger possible risk–related actions. Risk perceptions, the subjective element of risk, are informed by the trust beliefs. In some literature, on the contrary, a common approach has been to see trust and risk as co–determined: high trust associated with low risk, or vice versa. For example, de Ruyter, et al. (2001) find that a high organisational reputation significantly increases consumers’ trust in an e–service, while a high amount of perceived risk towards that e–service sees the level of trust decreased. Mayer, et al. (1995) emphasise the dependence of trust on the a priori existence of risk and see trust in terms of the willingness to assume risk: trust only has meaning when risk exists and indeed is defined in terms of risk. At the same time, there are other approaches that see the relationship inversely, with risk affecting trust. Comegys, et al. (2009), instead of asserting a unidirectional relationship between trust and risk or vice versa, posit a mutually conditioning relationship between the two. They find that the amount of trust towards a certain online vendor affects how the customer perceives the risks associated with purchasing from that vendor, and that the amount of risk associated with online shopping affects the customer’s trust in online vendors generally. If a risk affects the purchase process it will also affect the trust profile.

This interdependence begs the question: if trust and risk are lock–stepped together, could one of them be redundant? Only one of the two would be required to complete an assessment, since the other adjusts accordingly. A conceptual consideration of this question deserves further research to clarify and theorise the relationship between trust and risk. Pragmatically, however, we argue that both categories, trust beliefs and risk perceptions, are useful and therefore justified. The value lies in the distinctive practical implications that emerge. In our study, the findings suggesting a lack of trustworthiness of identity management authorities should direct attention to the challenge of enhancing public trust. Findings pointing to high risk perceptions, on the other hand, suggest actions aimed at reducing risk, which involves different tools and techniques that can be incorporated in both the design and management of IdMS.

Trust enhancement, transparency and emotions

Considering for the moment options for trust enhancement and repair, we take into account findings involving the three dimensions of trustworthiness, as well as the interpretations of past events from which negative judgments emerged. A low level of trust in government is associated with perceived shortcomings in both morality (integrity and benevolence) and competence. This double trust failure renders any recovery task much more difficult, because research on trust repair suggests that the remedies for integrity violations are at odds with those for competence violations. For example, Kim and Dirks (2006) find that a full apology is more successful for a competence–based violation, whereas for an integrity–based violation an attempt to mitigate the blame with external attribution (i.e., blame someone else) or indeed an outright denial works better. Governments do apologise to citizens: for example, U.K. Prime Minister Gordon Brown apologised in Parliament on 21 November 2007 for the loss of two CDs containing the personal information of 25 million people [21]. Although competence is an important issue, our data show that the concerns about function creep relate to beliefs in integrity–based violations, and for these cases the trust repair research suggests that apology is not the best strategy.

So what steps should governments take in order to retrieve the situation? More transparency for citizens on the processing of their personal data could go a long way towards altering the negative perceptions, because control is only possible with knowledge of what is happening. There was a clear distinction in our data between the benevolence aspect of trustworthiness, with the forceful use of emotive expressions and rhetoric, and the competence and integrity aspects, which respondents tended to express in more rationalistic terms. Recent research based on neurological analysis showed a clear distinction in the brain areas associated with the dimensions of trust and distrust, with credibility and non–credibility being mostly associated with the brain’s more cognitive areas, while benevolence and malevolence are mostly associated with the brain’s more emotional areas (Benbasat, et al., 2010). Benevolence appears to have potential for restoring trust; perhaps because it speaks to the emotional rather than the rational side, it offers a more direct route to the perceptions of citizens. Future research could explore this further and also examine the ethical considerations involved.

So from a pragmatic point of view, from that of the public authorities who desire to introduce eID systems into eGovernment, one way of countering the damagingly high risk perceptions that may jeopardise acceptance by citizens might be to enhance trust. Efforts aimed at rebuilding and enhancing trust, as indicated above, may prove useful; however, the difficulty and limits of achieving desired levels of trust should be acknowledged. Slovic (2000) referred to the unfortunate asymmetry between the difficulty of creating trust and the ease of destroying it: ‘trust is fragile, it is typically created rather slowly but it can be destroyed in an instant by a single mishap ... when trust is lost it may take a long time to rebuild it to the former state ... lost trust may never be regained.’ [22]

Risk reduction, benefit and acceptance

The limitations in trust building and repair might suggest an alternative strategy of reducing risk perceptions, for instance, with a renewed focus on standards, certification by third parties and introducing Privacy Enhancing Technologies (PETs) into eGovernment systems built explicitly on a ‘no trust’ assumption (Gürses, 2010). Other methods that can reduce the perceived risks may include the deployment of appropriate security technologies and of managerial and organisational policies, such as those contained in the “Data Handling Procedures in Government” [23] (U.K.) report of June 2008.

In unpicking the complexities of the trust–risk ensemble, emphasis should also be placed on the substance of risk perceptions since these might influence strongly how the actors behave. The question of citizen–centricity and public acceptance of the new technology has been an important policy discussion on eID, in which perceptions of risk play a crucial role. Grabner–Kräuter and Kaluscha (2003) found that for eCommerce the subjective aspects of risk perception are more important than objective security: the perceived risk outweighs the “real” risk. Another strategy for reducing perceived risk might lie in stressing perceived benefits. Poortinga and Pidgeon (2005) identify two models of especial relevance to our findings in this respect. One might be characterised as a pre–evaluation trust model where low levels of trust lead to higher risk expectations, lower benefits and indirectly lower acceptance. In this model high trust has the opposite effects, impacting on the cognitive process of forming beliefs on risks and benefits that eventually lead to an evaluative judgment. The other model, called an associationist view of trust, might be seen as a post–evaluation trust model where the causal relationship between trust and risk beliefs is considered spurious in any case because both are coloured by acceptance (of the risky event). In this way, trust is portrayed as the consequence of an affective evaluation. In effect, if the change has been already accepted on affective grounds then trust and risk are readjusted accordingly.

The critical balance between risks and benefits was reflected in clear evidence from our data; one respondent stated starkly: I don’t see a lot of value but a lot of risk. For many respondents, value is not seen as something the citizen can aspire to: interoperable IdMS were viewed as government–centric, not citizen–centric. Another respondent expressed a positive assessment of the idea but felt that the incompetence of the public authorities in implementing the new technology would outweigh what he or she admired as the potential benefits:

... theoretically the electronic identity card is a smasher; unfortunately politicians tend to be technically insufficient at implementation causing significantly more harm than potential benefits to the citizens

What might be concluded from this is the need for policy–makers to stress the potential benefits for the citizen, since acceptance is based ultimately on how the balance of risks and benefits is perceived. The crucial issue in the acceptance of IdMS is the citizens’ prevailing low trust not in the technology but in the government responsible for their personal data. It is important that citizens anticipate benefits from new identity systems sufficient to outweigh the perceived risks, particularly since people tend to weigh potential losses more heavily than equivalent gains (Kahneman and Tversky, 1979).

Exploring the perceptions of government stakeholders, Seltsikas and O’Keefe (2010) have found that the value and outcome expected from public identity management systems and eID include reducing online fraud and raising public trust. Cost reduction was also cited as a major desirable outcome influencing the need for interoperable identity service. A striking divide is revealed in the perceptions of government and citizens; benefits for the former are risks for the latter. If perceived benefit and public value are important evaluation considerations for all stakeholders in the development of eGovernment (Rowley, 2010) and, indeed, a pre–requisite of the citizen–centric vision, then policy–makers must address this emerging gap.

Worrying for the public authorities are the avowed intentions of some respondents to defy the purposes of eID with recalcitrant behaviour. But there is often a gap between what people say they will do and what they actually do, particularly in matters of privacy and the self–management of identity information. While consistently assigning high value to their privacy (Schneier, 2010), in their actual decision–making individuals at times seem to diverge considerably from their espoused perceptions. A small but instant gratification will often result in voluntary disclosure of personal data (Acquisti, 2004). Westin’s longitudinal studies characterised most people as privacy pragmatists, reflecting a ‘show me and I’ll decide’ approach towards sharing and risking identity information (Taylor, 2003). Once again we see the importance of perceived benefits, not so much in affecting the perceptions of risk, but in the willingness to accept it under certain conditions. Despite declared intentions, a benefit may lead to actions that ultimately mean accepting the risk.

 

++++++++++

Conclusion and future research

This research has contributed to understanding public perceptions of new identity management systems, revealing how citizens’ trust and risk perceptions unfold in relation to eID. In focusing on eGovernment, the study has proposed new aspects of theory and implications in an area that needs more research. The study adds to the growing research on the relationship between trust and risk, in which the focus until now has been on trust, especially in eCommerce. The literature does not offer a consensus about how trust and risk relate, and the models often elaborate one without the other, mostly trust without risk. Our contribution is in suggesting that trust beliefs underpin the perception of risk. The research further offers the notion of experience–based ‘background views’ — a grounded construct that captures the ways in which past events provide for the construction of trust beliefs. While interpersonal trust is based on direct interaction, institution–based trust, relevant in the citizen–government relationship, must rely on something else. This research sheds new light on this question, so far addressed mainly in studies of consumer–vendor relationships. In the eCommerce context, the question raised generally concerns the consumer’s inclination to trust a vendor not previously known to him or her, a situation referred to as initial trust (McKnight, et al., 1998). In such models, constructs capturing psychological characteristics, for example, are incorporated and used to explain predispositions to trust. By contrast, in the public sector context the clients (citizens) are only too familiar with the service provider. The familiarity of citizens with governmental authorities played a key role in our account, illuminating the formative context in which trust in public institutions is constructed and judgments are rationalized.

The issues of trust in identity management certainly go beyond the public sector: social networks such as Facebook and search engines such as Google have been in the spotlight. Both business models rest on exploiting personal data while offering identity authentication services. Both have had substantial privacy controversies in which their customers felt that liberties were being taken with their personal data.

Further work is nevertheless needed on the theorisation of the related concepts of risk and trust. Our proposed model, the result of a single grounded theory study undertaken in two countries, needs to be validated, tested and expanded in other countries to see whether citizens outside Europe respond similarly or in entirely different ways. The influence stream proposed, from experience to trust beliefs, risk perceptions and risk–related intentions, needs to be further investigated in different eGovernment contexts. Future research should also address the evident over–representation of male respondents in this study.

Practical implications are also offered by this study, arising from findings that indicate an overwhelming lack of trust and high perception of risk, both in multiple dimensions that apply to interoperable IdMS. At a pragmatic level, the emerging need for trust enhancement and risk reduction may best be addressed separately, despite their interdependence. In order to facilitate public acceptance of eID, public authorities should develop methods for trust (re)building, such as introducing transparency tools in the processing of personal data, although more research is clearly needed here (Morgeson, et al., 2011). While well–directed efforts in terms of trust repair should lower the perception of risk, alternative strategies for risk reduction are warranted. Finally, our study suggests that the benefits and value of interoperable identity systems need to be discussed in terms of risk reduction. Benefits for the public should be clear and real if the new systems are to win public acceptance. End of article

 

About the authors

Dr. Ruth Halperin is currently a lecturer at Haifa University, Israel. She holds a Ph.D. in Information Systems from the London School of Economics and Political Science, where she was a Research Fellow in the Information Systems and Innovation Group of the Department of Management. Her current research interests are in information risk, digital identity, privacy and online self–disclosure.
E–mail: ruth [dot] halperin [at] gmail [dot] com

James Backhouse holds degrees from the Universities of Exeter, London and Southampton. He was awarded a Ph.D. (Semantic Analysis in Information Systems Development) from the LSE, where he is Emeritus Reader in the Department of Management. The author of many publications in the field of information and security, his research currently examines information security from a social sciences perspective and centres on power, responsibility and trust and identity. His work has been published in MISQ, EJIS, ISJ, CACM, and JAIS, amongst others. He is currently Senior Associate Editor of the European Journal of Information Systems and was, until 2010, an Editor–in–Chief of the online Springer journal Identity in the Information Society.
E–mail: james [dot] backhouse [at] lse [dot] ac [dot] uk

 

Acknowledgements

This research was initiated as part of the EU project FIDIS (Future of Identity in the Information Society) under the 6th Framework Programme for Research and Technological Development, project reference number 507512. We thank all FIDIS project members who helped in the collection of the data, especially in Germany and the United Kingdom.

 

Notes

1. http://ec.europa.eu/idabc/en/document/2033/5849.html.

2. http://ec.europa.eu/isa/index_en.htm.

3. http://ec.europa.eu/idabc/servlets/Doc?id=25286.

4. http://europa.eu/rapid/pressReleasesAction.do?reference=MEMO/10/200&format=HTML&aged=0&language=EN&guiLanguage=en.

5. http://www.ccbe.eu/fileadmin/user_upload/NTCdocument/en_annex_technical_s1_1192451405.pdf.

6. King and Cotterill, 2007, p. 351.

7. http://www.ccegov.eu/downloads/Handbook_Final_031207.pdf.

8. http://www.epractice.eu/files/download/awards/ResearchReport2007.pdf.

9. http://identityproject.lse.ac.uk/identityreport.pdf.

10. http://ec.europa.eu/public_opinion/archives/ebs/ebs_359_en.pdf.

11. http://ec.europa.eu/public_opinion/archives/eb/eb64/eb64_en.htm.

12. EC NoE Future of Identity in the Information Society (www.fidis.net).

13. Kelle, 2007, p. 197.

14. Butler and O’Reilly, 2010, p. 14.

15. These include confidentiality, data integrity and availability, and are commonly known as the CIA triad. See, e.g., Dhillon (2007).

16. Renn, 2008, p. 223.

17. http://www.gesetze-im-internet.de/.

18. Miles and Huberman, 1984, p. 132.

19. Miles and Huberman, 1984, p. 155.

20. Miles and Huberman, 1984, p. 160.

21. http://www.guardian.co.uk/politics/2007/nov/21/immigrationpolicy.economy3.

22. Slovic, 2000, p. 319.

23. http://www.cabinetoffice.gov.uk/resource-library/data-handling-procedures-government.

 

References

A. Acquisti, 2004. “Privacy in electronic commerce and the economics of immediate gratification,” paper presented at the EC ’04: ACM Electronic Commerce Conference, at http://www.heinz.cmu.edu/~acquisti/papers/privacy-gratification.pdf, accessed 20 March 2012.

M. Agar, 1980. The professional stranger: An informal introduction to ethnography. New York: Academic Press.

P. Ågerfalk, 2004. “Grounding through operationalization: Constructing tangible theory in IS research,” paper presented at the ECIS 2004: 12th European Conference on Information Systems, at http://www.vits.org/publikationer/dokument/425.pdf, accessed 20 March 2012.

G. Aichholzer and S. Strauß, 2010. “Electronic identity management in e–government 2.0: Exploring a system innovation exemplified by Austria,” Information Polity, volume 15, numbers 1–2, pp. 139–152.

H. Akkermans and J. Vennix, 1997. “Clients’ opinions on group model–building: An exploratory study,” System Dynamics Review, volume 13, number 1, pp. 3–31. http://dx.doi.org/10.1002/(SICI)1099-1727(199721)13:1<3::AID-SDR113>3.0.CO;2-I

J. Backhouse and R. Halperin, 2009. “Approaching interoperability for identity management systems,” In: K. Rannenberg, D. Royer, and A. Deuker (editors). The future of identity in the information society: Challenges and opportunities. Berlin: Springer, pp. 245–268.

I. Benbasat, D. Gefen, and P. Pavlou, 2010. “Introduction to the special issue on novel perspectives on trust in information systems,” MIS Quarterly, volume 34, number 2, pp. 367–371.

A. Bryant and K. Charmaz, 2007. “Grounded theory in historical perspective: An epistemological account,” In: A. Bryant and K. Charmaz (editors). Sage handbook of grounded theory. London: Sage, pp. 31–57.

T. Butler and P. O’Reilly, 2010. “Recovering the ontological foundation of the grounded theory method,” ICIS 2010: Proceedings of the International Conference on Information Systems, at http://aisel.aisnet.org/icis2010_submissions/75/, accessed 20 March 2012.

D. Chen and G. Doumeingts, 2003. “European initiatives to develop interoperability of enterprise applications — Basic concepts, framework and roadmap,” Annual Reviews in Control, volume 27, number 2, pp. 153–162. http://dx.doi.org/10.1016/j.arcontrol.2003.09.001

C. Comegys, M. Hannula, and J. Väisänen, 2009. “Effects of consumer trust and risk on online purchase decision–making: A comparison of Finnish and United States students,” International Journal of Management, volume 26, number 2, pp. 295–308.

K. de Ruyter, M. Wetzels, and M. Kleijnen, 2001. “Customer adoption of e-service: An experimental study,” International Journal of Service Industry Management, volume 12, number 2, pp. 184–207. http://dx.doi.org/10.1108/09564230110387542

G. Dhillon, 2007. Principles of information systems security: Text and cases. Hoboken, N.J.: Wiley.

D. Gefen, R. Srinivasan Rao, and N. Tractinsky, 2003. “The conceptualization of trust, risk and their relationship in electronic commerce: The need for clarifications,” HICSS ’03: Proceedings of the 36th Hawaii International Conference on System Sciences, pp. 192–201.

B. Glaser, 1978. Theoretical sensitivity: Advances in the methodology of grounded theory. Mill Valley, Calif.: Sociology Press.

B. Glaser and A. Strauss, 1967. The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine.

S. Grabner–Kräuter and E. Kaluscha, 2003. “Empirical research in on–line trust: A review and critical assessment,” International Journal of Human–Computer Studies, volume 58, number 6, pp. 783–812.
http://dx.doi.org/10.1016/S1071-5819(03)00043-0

S. Gürses, 2010. “PETs and their users: A critical review of the potentials and limitations of the privacy as confidentiality paradigm,” Identity in the Information Society, volume 3, number 3, pp. 539–563.
http://dx.doi.org/10.1007/s12394-010-0073-8

F. Henri, 1992. “Computer conferencing and content analysis,” In: A. Kaye (editor). Collaborative learning through computer conferencing: The Najaden papers. Berlin: Springer, pp. 117–136.

H.M. Government. Cabinet Office, 2006. “Transformational government enabled by technology: Annual report,” at http://www.official-documents.gov.uk/document/cm69/6970/6970.pdf, accessed 20 March 2012.

J. Hudson, 2006. “Institutional trust and subjective well–being across the EU,” Kyklos, volume 59, number 1, pp. 43–62.
http://dx.doi.org/10.1111/j.1467-6435.2006.00319.x

IDABC, 2005. “European interoperability framework for pan–European eGovernment services,” at http://ec.europa.eu/idabc/en/document/3761/5845.html, accessed 20 March 2012.

P. Jaeger and J. Bertot, 2010. “Designing, implementing, and evaluating user–centered and citizen–centered e–government,” International Journal of Electronic Government Research, volume 6, number 2, pp. 1–17.
http://dx.doi.org/10.4018/jegr.2010040101

D. Kahneman and A. Tversky, 1979. “Prospect theory: An analysis of decision under risk,” Econometrica, volume 47, number 2, pp. 263–292.
http://dx.doi.org/10.2307/1914185

U. Kelle, 2007. “The development of categories: Different approaches in grounded theory,” In: A. Bryant and K. Charmaz (editors). Sage handbook of grounded theory. London: Sage, pp. 191–213.

P. Kim and K. Dirks, 2006. “When more blame is better than less: The implications of internal vs. external attributions for the repair of trust after a competence– vs. integrity–based trust violation,” Organizational Behavior and Human Decision Processes, volume 99, number 1, pp. 49–65.
http://dx.doi.org/10.1016/j.obhdp.2005.07.002

T. Kinder, 2003. “Mrs Miller moves house: The interoperability of local public services in Europe,” Journal of European Social Policy, volume 13, number 2, pp. 141–157.
http://dx.doi.org/10.1177/0958928703013002003

S. King and S. Cotterill, 2007. “Transformational government? The role of information technology in delivering citizen–centric local public services,” Local Government Studies, volume 33, number 3, pp. 333–354.
http://dx.doi.org/10.1080/03003930701289430

R. Kramer, 1999. “Trust and distrust in organizations: Emerging perspectives, enduring questions,” Annual Review of Psychology, volume 50, pp. 569–598.
http://dx.doi.org/10.1146/annurev.psych.50.1.569

H. Kubicek and T. Noack, 2010. “Different countries — different paths: Extended comparison of the introduction of eIDs in eight European countries,” Identity in the Information Society, volume 3, number 1, pp. 235–245.
http://dx.doi.org/10.1007/s12394-010-0063-x

A. Lee and R. Baskerville, 2003. “Generalizing generalizability in information systems research,” Information Systems Research, volume 14, number 3, pp. 221–243.
http://dx.doi.org/10.1287/isre.14.3.221.16560

M. Lips, 2010. “Rethinking citizen–government relationships in the age of digital identity: Insights from research,” Information Polity, volume 15, number 4, pp. 273–289.

M. Lips, 2007. “E–government under construction: Challenging traditional conceptions of citizenship,” In: P. Nixon and V. Koutrakou (editors). E–government in Europe: Re–booting the state. London: Routledge, pp. 33–47.

LSE Identity Project, 2010. “Identity Project resources,” at http://identityproject.lse.ac.uk, accessed 20 March 2012.

N. Luhmann, 1993. Risk: A sociological theory. Translated by R. Barrett. New York: A. de Gruyter.

D. Lupton, 1999. Risk. London: Routledge.

W. Lusoli and C. Miltgen, 2009. “Young people and emerging digital services: An exploratory survey on motivations, perceptions and acceptance of risks,” at http://ftp.jrc.es/EURdoc/JRC50089.pdf, accessed 20 March 2012.

P. Martin and B. Turner, 1986. “Grounded theory and organizational research,” Journal of Applied Behavioral Science, volume 22, number 2, pp. 141–157.
http://dx.doi.org/10.1177/002188638602200207

R. Mayer, J. Davis, and F. Schoorman, 1995. “An integrative model of organizational trust,” Academy of Management Review, volume 20, number 3, pp. 709–734.

D. McKnight, V. Choudhury, and C. Kacmar, 2002. “Developing and validating trust measures for e–commerce: An integrative typology,” Information Systems Research, volume 13, number 3, pp. 334–359.
http://dx.doi.org/10.1287/isre.13.3.334.81

D. McKnight, L. Cummings, and N. Chervany, 1998. “Initial trust formation in new organizational relationships,” Academy of Management Review, volume 23, number 3, pp. 473–490.

M. Miles and A. Huberman, 1984. Qualitative data analysis: A source book of new methods. Beverly Hills, Calif.: Sage.

F. Morgeson, D. VanAmburg, and S. Mithas, 2011. “Misplaced trust? Exploring the structure of the e–government–citizen trust relationship,” Journal of Public Administration Research and Theory, volume 21, number 2, pp. 257–283.
http://dx.doi.org/10.1093/jopart/muq006

W. Orlikowski, 1993. “CASE tools are organizational change: Investigating incremental and radical changes in systems development,” MIS Quarterly, volume 17, number 3, pp. 309–340.
http://dx.doi.org/10.2307/249774

B. Otjacques, P. Hitzelberger, and F. Feltz, 2007. “Interoperability of e–government information systems: Issues of identification and data sharing,” Journal of Management Information Systems, volume 23, number 4, pp. 29–51.
http://dx.doi.org/10.2753/MIS0742-1222230403

P. Pavlou and D. Gefen, 2004. “Building effective online marketplaces with institution–based trust,” Information Systems Research, volume 15, number 1, pp. 37–59.
http://dx.doi.org/10.1287/isre.1040.0015

W. Poortinga and N. Pidgeon, 2005. “Trust in risk regulation: Cause or consequence of the acceptability of GM food?” Risk Analysis, volume 25, number 1, pp. 199–209.
http://dx.doi.org/10.1111/j.0272-4332.2005.00579.x

G. Prokopiadou, C. Papatheodorou, and D. Moschopoulos, 2004. “Integrating knowledge management tools for government information,” Government Information Quarterly, volume 21, number 2, pp. 170–198.
http://dx.doi.org/10.1016/j.giq.2004.02.001

D. Ray, U. Gulla, S. Dash, and M. Gupta, 2011. “A critical survey of selected government interoperability frameworks,” Transforming Government, volume 5, number 2, pp. 114–142.
http://dx.doi.org/10.1108/17506161111131168

D. Recordon and D. Reed, 2006. “OpenID 2.0: A platform for user–centric identity management,” DIM ’06: Proceedings of the Second ACM Workshop on Digital Identity Management, pp. 11–16.

C. Reddick, 2010. “Citizen–centric e–government,” In: C. Reddick (editor). Homeland security preparedness and information systems: Strategies for managing public policy. Hershey, Pa.: Information Science Reference, pp. 45–75.

J. Reichertz, 2007. “Abduction: The logic of discovery of grounded theory,” In: A. Bryant and K. Charmaz (editors). Sage handbook of grounded theory. London: Sage, pp. 214–228.

O. Renn, 2008. Risk governance: Coping with uncertainty in a complex world. London: Earthscan.

J. Rowley, 2010. “e–Government stakeholders — Who are they and what do they want?” International Journal of Information Management, volume 31, number 1, pp. 53–62.
http://dx.doi.org/10.1016/j.ijinfomgt.2010.05.005

S. Saxby, 2006. “eGovernment is dead: Long live transformation,” Computer Law & Security Report, volume 22, number 1, pp. 1–2.
http://dx.doi.org/10.1016/j.clsr.2005.12.002

B. Schneier, 2010. “Privacy and control,” Journal of Privacy and Confidentiality, volume 2, number 1, pp. 3–4, and at http://www.schneier.com/paper-privacy-and-control.html, accessed 20 March 2012.

B. Schnittger, 2005. “Introducing IDABC: European integration by electronic means,” SYNeRGY, pp. 3–6.

H. Scholl, 2005. “Interoperability in e–government: More than just smart middleware,” HICSS ’05: Proceedings of the 38th Annual Hawaii International Conference on System Sciences, p. 123.

F. Schoorman, R. Mayer, and J. Davis, 2007. “An integrative model of organizational trust: Past, present, and future,” Academy of Management Review, volume 32, number 2, pp. 344–354.
http://dx.doi.org/10.5465/AMR.2007.24348410

P. Seltsikas and R. O’Keefe, 2010. “Expectations and outcomes in electronic identity management: The role of trust and public value,” European Journal of Information Systems, volume 19, number 1, pp. 93–103.
http://dx.doi.org/10.1057/ejis.2009.51

P. Slovic, 2000. The perception of risk. London: Earthscan.

A. Strauss and J. Corbin, 1990. Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, Calif.: Sage.

H. Taylor, 2003. “Most people are ‘privacy pragmatists’ who, while concerned about privacy, will sometimes trade it off for other benefits,” HarrisInteractive (19 March), at http://www.harrisinteractive.com/vault/Harris-Interactive-Poll-Research-Most-People-Are-Privacy-Pragmatists-Who-While-Conc-2003-03.pdf, accessed 20 March 2012.

M. Threlfall, 2003. “European social integration: Harmonization, convergence and single social areas,” Journal of European Social Policy, volume 13, number 2, pp. 121–139.
http://dx.doi.org/10.1177/0958928703013002002

J. Turow and M. Hennessy, 2007. “Internet privacy and institutional trust: Insights from a national survey,” New Media & Society, volume 9, number 2, pp. 300–318.
http://dx.doi.org/10.1177/1461444807072219

C. Urquhart, H. Lehmann, and M. Myers, 2010. “Putting the ‘theory’ back into grounded theory: Guidelines for grounded theory studies in information systems,” Information Systems Journal, volume 20, number 4, pp. 357–381.
http://dx.doi.org/10.1111/j.1365-2575.2009.00328.x

P. Verdegem and L. Hauttekeete, 2010. “A user–centric approach in e–government policies: The path of effectiveness?” In: C. Reddick (editor). Citizens and e–government: Evaluating policy and management. Hershey, Pa.: Information Science Reference, pp. 20–36.

Y. Wang and J. Vassileva, 2003. “Trust and reputation model in peer–to–peer networks,” P2P ’03: Proceedings of the Third International Conference on Peer–to–Peer Computing, pp. 150–157.

T. Yamagishi and M. Yamagishi, 1994. “Trust and commitment in the United States and Japan,” Motivation and Emotion, volume 18, number 2, pp. 129–166.
http://dx.doi.org/10.1007/BF02249397

R. Yin, 1984. Case study research: Design and methods. Second edition. Thousand Oaks, Calif.: Sage.

L. Zucker, 1986. “Production of trust: Institutional sources of economic structure,” Research in Organizational Behavior, volume 8, pp. 53–111.

Editorial history

Received 14 November 2011; revised 28 February 2012; revised 29 March 2012; accepted 29 March 2012.


Copyright © 2012, First Monday.
Copyright © 2012, Ruth Halperin and James Backhouse.

Trust, risk and eID: Exploring public perceptions of digital identity systems
by Ruth Halperin and James Backhouse
First Monday, Volume 17, Number 4 - 2 April 2012
http://firstmonday.org/ojs/index.php/fm/article/view/3867/3196
doi:10.5210/fm.v17i4.3867



