WRITTEN ON December 22nd, 2006 BY William Heath AND STORED IN Design: Co-creation, Foundation of Trust, Ideal Government - project, What do we want?

We can’t say who said this or where, but here’s a note Ruth and I drafted up after a terrific evening’s conversation with friends and colleagues about trust in e-enabled public services. Thanks to all who came and contributed. See you all in 2007; Happy Christmas meanwhile.

——————————

It’s accepted wisdom to say that trust lies at the heart of successful e-enabled public services. But do we know what we mean by trust? Are we all talking about the same thing?

The current “Big Opt-Out” campaign challenges the core of Connecting for Health. Even if the CfH contractors deliver national electronic patient records, the programme fails at the social level, on the issue of trust, if patients ask GPs not to submit their records and GPs are sympathetic. The same applies to personalised services based on ID management and the No2ID campaign, and to biometrics in schools and Leavethemkidsalone.

The awkward squad is “joining up” faster than government. Government needs them as critical friends.

Trust and usage are like the chicken and the egg: which comes first? Must we prove e-enabled services are worthy of trust before we adopt them (as the Big Opt-Out campaigners suggest)? Or should we go ahead and adopt systems like the electronic record, ID management and the Children’s Index so we can show the fears around these well-intentioned systems are groundless?

The instinctive JFDI (just *** do it) approach of compulsion leads to today’s paradoxical situation, where service providers are set to force supposedly “citizen-centric” services on everyone. Perhaps they hope the vocal minority will fall into line and drop its objections to centralised and personalised services and data sharing; or perhaps they simply don’t care what it thinks, because it is unrepresentative.

Service delivery is about human decisions rather than databases. We can either support initiative or require process. We can create databases until the cows come home – and we do. But the Victoria Climbie case was one of human failings. Its legacy may be joined-up services that we trust. Or it may destroy the trust of a generation of children in confidential public services, which is not what Lord Laming would wish to be remembered for.

A small irony is that, recently brought into the country as she was, Victoria Climbie would not even have been on the new Children’s Index.

One conundrum is: how can we expect people to trust joined-up government when public services are made up of many parts that do not trust each other? They work differently, don’t co-operate, and hold customer data to different standards and for different purposes. This isn’t about technology that doesn’t connect; it’s about a culture of mistrust. Do we have to correct that first, and tackle the enormous culture change of making public services work together as a whole? Recalling Montesquieu’s lesson about the importance of the separation of powers, that would probably be undesirable as well as infeasible.

Volume of usage is no measure of the success of a compulsory system, or one where there is no alternative. But to overuse the word “trust” doesn’t help when it means different things to different people:

The divine Wikipedia points out that
– to the security engineer, a “trusted system” is one you have no choice but to trust. It follows that you want as few of them as possible. So if the ID System or electronic patient record is a “trusted system” in this sense, it is valid to ask why, and whether it has to be. This technical meaning of trust is hard to convey to a lay, generalist audience.
– to the policy analyst, a “trusted system” is one which denies people access unless some sort of predictive risk analysis or surveillance-based deviation analysis is undertaken. An open society lets people do things and polices the exceptions (cf Beccaria, On Crimes and Punishments). But a trusted system uses surveillance and scoring systems based on credit, identity or other risk profiles (cf Foucault’s “carceral continuum”). Examples are: the no-fly list; credit referencing; surveillance cameras, fences, crash barriers and machine guns around Parliament; and the profiling of children at risk.
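The contrast between those two models can be sketched in a few lines of code. This is purely an editorial illustration (all names and thresholds are invented, not drawn from any real system): the open service acts for everyone and polices the exceptions afterwards via an audit trail, while the “trusted system” gates every request on an upfront risk score.

```python
# Toy sketch (all names invented) of the two access models described above.

def open_service(request, audit_log):
    """Serve everyone; record the action so abuse can be policed afterwards."""
    audit_log.append(request)
    return "served"

def trusted_system(request, risk_score, threshold=0.5):
    """Deny access unless an upfront risk analysis clears the requester."""
    if risk_score >= threshold:   # e.g. a no-fly-list hit or a poor credit profile
        return "denied"
    return "served"

log = []
open_service({"user": "alice"}, log)        # served, and logged for later review
trusted_system({"user": "alice"}, 0.8)      # denied before anything happens
```

The design difference is where the judgement sits: after the act (Beccaria’s model) or before it (the risk-profiling model).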

Meanwhile customers “trust” brands. We trust Easyjet to be Easyjet and Tesco to be Tesco. And citizens may not like what they get but generally trust the outcome of elections to be fair. The finance director, Treasury or NAO wants to be able to trust in project outcomes. For this you need to measure the benefits, as CJIT did.

If users co-designed and co-created the systems supposed to help them such as the children’s index, health record or ID management, would they be centralised systems, or personal? We didn’t ask. So we may find out the hard way further down the line.

The recent DTI-supported Trustguide work is illuminating on the question of trust online. It introduced groups of lay people to basic cybersecurity issues before conducting focus groups. These found that –

– people don’t trust the Internet
– they know it’s not safe
– but that’s not the point
– they use it because it’s convenient
– so they rate convenience over trust
– they also want a human face to the things they deal with
– and restitution when things go wrong.

We should take senior officials through that Trustguide research process: we’re willing, and the Trustguide team is too.

If government conceives, designs, builds and measures its services in glorious introspective isolation, it’s hardly surprising people don’t trust it. When services are designed around the end-user, even involving the end-user, trust follows more naturally. This shines through in the Varney report in a way it did not in the original Transformational Government strategy, even though the nature of the web suggests such an approach is the essence of successful contemporary e-enabled services.

Perhaps therefore we’d better stop using the word trust for everything good about e-enabled services. And it seems that co-governance, going through this change with people, not doing it to people, is the way to get what we all want.

6 Responses to “Reflections on TRUST in e-enabled public services”

PF wrote on December 22nd, 2006 7:32 pm :

Hi William,
compliments of the season and all that. As a supplier to the National Health Service of software for quality improvement, I understand the medical model that suggests that all processes and procedures should be based upon empirical evidence. In which case I suppose my Christmas wish would be to see more evidence-based government. The health service and other public sector organizations are constantly barraged with new initiatives which, to those on the front line, seem to be developed on a whim by the mandarins in Whitehall. These initiatives are seldom evaluated properly and may not even reach a conclusion before the next initiative arrives in your inbox.

Kind regards…

Wendy Grossman wrote on December 29th, 2006 10:25 pm :

I thought this was a really nice piece of analysis (hence linking to it from today’s net.wars, in which I forgot to mention the health service…)

I particularly like the note about the reversal of trust between open societies and trusted systems.

As for the commenter’s evidence-based government: it’s going to be really interesting to see what the govt does with the Gower report, which *did* consider the evidence (particularly the economic review it commissioned) and came to conclusions opposed to those we might expect the govt to welcome.

wg

SM wrote on January 2nd, 2007 3:20 pm :

You make good points here, and I believe it’s also worth pointing out that ‘trust’ is neither ‘transitive’ (X may trust me, and I may trust you, but that doesn’t necessarily mean that X trusts you), nor ‘binary’ (I may trust someone to do some things but not others, usually depending on my personal risk), nor ‘static’ (my trust in someone depends on past experience, and the benefit to me of placing that trust). There is also a useful distinction between a ‘trusted system’ and a ‘trustworthy system’: the mis-named ‘Trusted Computing Platform’ looks as if it will be ‘trustworthy’ in the sense that it has strong security mechanisms, but whether it is ‘trusted’ or not will depend on what it is used for (eg malware protection vs content protection).
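Those three properties can be made concrete in a small model. This is an editorial sketch only (the people and tasks are invented): trust is represented as a relation scoped per pair and per task, and updated by experience, so it is automatically non-transitive, non-binary and non-static.

```python
# Toy model (invented names) of trust as a per-pair, per-task, experience-updated
# relation, matching the three properties noted above.

trust = {
    ("X", "me"): {"fix_my_pc"},               # X trusts me, but only for some tasks
    ("me", "you"): {"fix_my_pc", "babysit"},
}

def trusts(a, b, task):
    """Per-task (not binary) and per-pair (not transitive)."""
    return task in trust.get((a, b), set())

def record_experience(a, b, task, went_well):
    """Not static: trust grows or shrinks with experience."""
    tasks = trust.setdefault((a, b), set())
    if went_well:
        tasks.add(task)
    else:
        tasks.discard(task)

# X trusts me, and I trust you, yet X does not trust you: no transitivity.
assert trusts("X", "me", "fix_my_pc") and trusts("me", "you", "fix_my_pc")
assert not trusts("X", "you", "fix_my_pc")
```

A bad experience then revokes trust for that task alone, leaving the rest of the relation intact.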

‘Trustworthy’ systems are neutral in that trust depends on the controlling organisation (eg some people may mis-trust government however trustworthy the systems are). But of course ‘non-trustworthy’ systems can also undermine trust in an organisation…

Lee Bryant wrote on January 3rd, 2007 4:15 pm :

I think the *way* we trust Tesco to be Tesco vs the government or NHS is very different. Tesco cannot section us, make wrong diagnoses or send us to hospital, whereas the latter can and do.

For me, the fundamental dependency for trust relationships is accountability. The health service is so systematised and referral-based (with little or no continuity of care) that nobody has any useful accountability, other than in the case of major error (and then you are often too dead to care 😉)

The problem here is about structure and scale. Rather than use technology to create mega-data and connected systems, I would rather technology is used to help a GP get to know me and stick around so that they can treat me in the future – i.e. technology should support small-scale relationships, rather than just enable system-wide features.

For intimate things like health, we will never trust a machine or a machine-like system – we tend to trust people or small groups (e.g., personally I trust my GP receptionist more than my GP because she has been there longer and knows more about my family).

I guess what I am trying to say is that trust varies with size and scope of domain. It can be chain-linked (as in LinkedIn or club membership) but it cannot be systematised at the scale of CfH.

Current cultural developments in technology talk about intelligence at the edge of the network, not the centre. We no longer need centralisation for interoperability. Perhaps we should hold our own health records, for example, with GPs holding a backup. Perhaps data should be federated but owned by those closest to it, rather than centralised with unclear ownership.
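The edge-held arrangement suggested here can be sketched as follows. This is an illustrative sketch only (the classes and data are invented for this note): the patient holds the primary record, and the GP keeps a synchronised backup that never becomes the master copy, in contrast to a single central database with unclear ownership.

```python
# Illustrative sketch (invented structures) of patient-held records with a
# GP-held backup, as suggested above.

class HealthRecord:
    def __init__(self, owner):
        self.owner = owner       # the record belongs to the person it describes
        self.entries = []

    def add_entry(self, entry):
        self.entries.append(entry)

def sync_backup(primary, backup):
    """The GP's copy mirrors the patient's record; it is never the master."""
    backup.entries = list(primary.entries)

mine = HealthRecord(owner="patient")
gp_copy = HealthRecord(owner="patient")   # ownership stays with the patient
mine.add_entry("2006-12: flu jab")
sync_backup(mine, gp_copy)
```

The design choice is that federation flows outward from the owner: the centre (or the GP) only ever holds a derived copy.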

Not sure if this makes sense, but I can fill in over lunch some time soon 😉

David Clouter wrote on May 18th, 2007 3:12 pm :

As one of the players in the so-called ‘awkward’ squad (LeaveThemKidsAlone), I have a couple of points I’d like to raise.

William, you state that fears around what you describe as “well-intentioned” systems are groundless.

A view perhaps not shared by Microsoft’s Identity Architect, Kim Cameron, at least in respect of the school biometrics debate.

On his identityblog he states: “It drives me nuts that people can just open their mouths and say anything they want about biometrics… without any regard for the facts. There should really be fines for this type of thing – rather like we have for people who pretend they’re a brain surgeon and then cut peoples’ heads open.”

The fact is that a conventional biometric template, as stored on an insecure, bog-standard school PC, is a person’s lifelong biometric identity. If this is compromised, citizens will face untold difficulties when joined-up e-government becomes a total reality. Nobody in Whitehall seems to have thought this through; trivial use of biometrics could undermine the billions currently being invested in secure systems, for example. Moreover, regrettably it appears that nobody is willing to engage in a sensible dialogue with stakeholders on the issues.
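The point about a biometric template being a “lifelong identity” can be illustrated with a toy example (editorial sketch; the data and helper are invented). A leaked password can be rotated to a new value, so the old credential becomes worthless; a leaked biometric template cannot be changed, so it stays compromised for life.

```python
# Toy illustration (invented data) of why a biometric leak is permanent:
# a password can be rotated after a breach, a fingerprint cannot.
import hashlib

def credential_id(secret):
    """Stand-in for a stored credential derived from a secret."""
    return hashlib.sha256(secret.encode()).hexdigest()

password = "hunter2"
fingerprint_template = "ridge-map-of-my-right-thumb"  # fixed for life

# Both credentials are exposed in a breach.
leaked = {credential_id(password), credential_id(fingerprint_template)}

# Afterwards the password is replaced; the biometric cannot be.
password = "correct-horse-battery"
```

After rotation the new password’s credential no longer matches anything in the leak, while the fingerprint-derived credential remains in the leaked set permanently.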

Kim Cameron reports that “since early 2005, more than 150 million personal records have been exposed in dozens of incidents, according to information compiled by the Privacy Rights Clearinghouse.”

I wouldn’t take a medicine that hadn’t been properly and adequately trialled, nor would I consider buying a car that wasn’t guaranteed not to explode in normal traffic conditions. Likewise, shouldn’t I be offered the same guarantees if I am asked to entrust my identity to a third party? Why should government I.T. projects be exempt from these basic common-sense requirements?

Just as in the early days of bank cashpoint machines, government will, initially at least, insist that the systems it puts in place are 100% trusted and incapable of error. The burden of proof to the contrary will rest on the citizen. Without checks and balances, people may find themselves inadvertently disenfranchised from society.

It is my view that we must indeed prove e-enabled services are worthy of trust before we adopt them, particularly if there is to be an element of compulsion in respect of citizen participation. With the proposal that electronic patient records should be made available across the EU, I would want assurances that measures were in place to prevent a doctor in a poorer member state from supplementing his or her £100-a-month salary by selling my confidential medical records to insurance companies.

Then there is the big picture. Whilst I have no doubt that a ‘debt account’ verified by a biometric would assist and streamline government revenue collection, such a system is wide open to abuse.

Criminal gangs, some with inside help, will devote considerable resources to stealing an innocent person’s identity, for use in their next bank robbery. It will be hard for a victim of such an ID theft to mount a defence, since for the whole project to function effectively, trust in the validity of the data must be absolute.

Worse still, although we currently doubt the possibility of it ever happening in Britain, a future totalitarian government could hijack all the benign systems we are putting in place today to create a nightmare Orwellian state ruling via “cradle to grave” surveillance.

Am I being irrational or alarmist? That’s what many who sounded warnings in 1930s Germany were told. Until it was too late.

How can we guarantee with 100% certainty that all future UK governments will be as benign as the present one?

W wrote on May 18th, 2007 3:27 pm :

> you state that fears around what you describe as “well-intentioned” systems are groundless.

Ooh, I didn’t mean to. What I meant to do was write up a summary of a conversation which raised two options:
1. Do we seek to prove upfront that these services are done in a way which will be OK?
2. Or do we just go ahead and sort out the trust afterwards?

I’d prefer to see the first. What’s actually happening is the second. Sorry if I was ambiguous.

> How can we guarantee with 100% certainty that all future UK governments will be as benign as the present one?

Well, of course we can’t. And the present one is only relatively benign. I don’t call fibbing and spin benign at all.