In the comments to a piece I wrote recently about Blair, Managerialism and Michael Oakeshott, I rather rashly said I’d try to develop the connections between Blair’s style of managerialist government and e-government. This looks as if it’s going to turn into a magnum opus, so this is part the first: an account of how business processes are automated in commercial environments — how to do it and how not to do it. This is something of which I have some professional knowledge, particularly as it relates to legal and financial services. Later in the week, I’ll attempt to explore the ways government has seized on these developments as a magic solution to the problems of government, suggest why — incompetence in IT procurement apart — government tends to get hold of the wrong end of the stick when it looks at computerisation as a solution to its problems, and suggest reasons why Mr Blair and his colleagues are particularly prone to fall into obvious traps whenever they go near IT projects.
Undoubtedly, the relationships between various private institutions, notably banks and other people offering financial services, have been revolutionised over the last few years by the availability of new technology; rather than conducting our business face-to-face with the manager and staff of a particular bank, who’re familiar with our accounts and have known us as customers over the years, we tend now to have transactions over the phone with anonymous operators in call-centres, or over the internet. This is certainly far more convenient, at least when things work properly, for the customer and, of course, far cheaper for the bank.
However, the banks and financial services industries have learned from bitter experience — all too frequently bitter for both them and their customers — that it’s not just a matter of closing your high street branches and opening a load of call centres. That sounds flippant, but many changes were essentially cost-driven — the board had looked at what all this high street real estate and highly trained staff were costing them, and reasoned that transferring operations to a call-centre in the middle of nowhere, staffed by operators paid far less than their customer-facing staff to deal with a far higher volume of customers, would save them a lot of money. With the right IT system, they reasoned, it would all be straightforward.
It’s not as simple as that, though. Partly it’s not so simple because you’re subject to various physical constraints that the institution doesn’t always appreciate. A call centre operator can only handle so many transactions an hour. Customers will only tolerate being told ‘your call is important to us, so please hold’ — with some Vivaldi while they wait — for so long; ideally, at least according to all the research I’ve seen, calls should be answered by an operator within 30 seconds if customers aren’t to start getting ratty, and if you keep people holding for more than a couple of minutes, about half of them will hang up in disgust and most of those who’ve held on will be fuming by the time their call comes to be answered — not the best way to keep your customers.
And, if too many calls stack up, the switchboard goes into melt-down; people may remember what happened when Orange relaunched mobile phones here one Christmas in the late 1990s. I know for a fact that they’d been warned — and the warnings went unheeded — that, given the way the launch was going, they hadn’t allowed for the number of calls their customer service desks and help lines were going to receive, and that the call centre’s call routing system would crash within hours of the launch because it could physically handle only so many calls stacked up on so many lines at a time. It couldn’t, and it did.
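These physical constraints are well understood in the trade; the standard Erlang C formula from queueing theory tells you, for a given call volume and call length, how many operators you need if most calls are to be answered within that 30-second target, and it also shows why the switchboard melts down once the offered load reaches the number of agents. A rough sketch (the traffic figures below are invented purely for illustration):

```python
from math import factorial, exp

def erlang_c(agents: int, offered_load: float) -> float:
    """Probability a caller has to wait (M/M/c queue, Erlang C formula)."""
    a, c = offered_load, agents
    if a >= c:
        return 1.0  # load exceeds capacity: the queue grows without bound
    top = (a ** c / factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

def service_level(agents, calls_per_hour, avg_call_minutes, answer_within_s):
    """Fraction of calls answered within the target time."""
    mu = 60.0 / avg_call_minutes            # calls one agent completes per hour
    a = calls_per_hour / mu                 # offered load in erlangs
    pw = erlang_c(agents, a)
    return 1.0 - pw * exp(-(agents - a) * mu * (answer_within_s / 3600.0))

# Invented example: 600 calls an hour, 4-minute calls = 40 erlangs of load.
# Adding agents sharply improves the within-30-seconds answer rate.
for n in (41, 45, 50):
    print(n, "agents:", round(service_level(n, 600, 4, 30), 2))
```

The point of the sketch is the `a >= c` branch: once calls arrive faster than the agents can clear them, no amount of hold music saves you, which is roughly what happened to Orange.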
Even when the system’s up and running, however, you still need to address the question of what you do with the calls. Previously, the bank manager and the desk staff shielded the customers from the bank’s back-office operations; things might be in complete chaos in the back office, but the customer didn’t see any of that — so long as his transactions were handled properly, he didn’t need to know about the mad search for papers and general panic that went on each time they tried to reconcile closing balances for the day. In the long run, he might have noticed this in his bank charges, of course, and the shareholders would notice it in their dividends, but not in his day-to-day transactions.
Now, of course, when a customer phones up to enquire about his balance on one account, he not unreasonably expects to be able to enquire about balances on other accounts he has with the bank and then, quite probably, to start transferring money between the various accounts without being endlessly transferred from department to department and asked to repeat his details all over again. That still happens, obviously, but the customer is now aware that it’s happening and gets irritated by it (at least I do when I phone my bank). That means designing a system whereby the operator can readily access all related accounts and switch between them.
The customer is also, very likely, going to want more than just information. He may well seek advice on a financial product — that is, one appropriate to his needs and also one that you’re prepared to sell him (no point in offering him a secured loan if he can’t offer any security, nor any point in offering him savings products designed for people who can afford regularly to save more a month than he has left after his regular outgoings). He may very well also want a decision from you about something — whether you’ll grant him an overdraft, whether the facility’s to be temporary or permanent, and what it’s going to cost him — and you might well want to be able to steer him towards a more appropriate product, such as a separate loan account.
Previously, this sort of request would have been dealt with by trained and experienced staff. The manager and assistant managers were paid to know what their bank’s policies were on loans and overdrafts; they were paid to know what their various financial products were, and they were trained to know the customers with whom they had a relationship. This expertise and knowledge has somehow to be replicated by the new call centre system, and this is where designing the system becomes particularly interesting.
In his classic 1947 essay, Rationalism in Politics, which I suggested predicted, with horrible accuracy, Tony Blair and most of his works and pomps, Oakeshott distinguishes between two sorts of knowledge:
Every science, every art, every practical activity requiring skill of any sort, indeed every human activity whatsoever, involves knowledge. And, universally, this knowledge is of two sorts, both of which are always involved in any actual activity. It is not, I think, making too much of it to call them two sorts of knowledge, because (though in fact they do not exist separately) there are certain important differences between them. The first sort of knowledge I will call technical knowledge or knowledge of technique. In every art and science, and in every practical activity, a technique is involved. In many activities this technical knowledge is formulated into rules which are, or may be, deliberately learned, remembered, and, as we say, put into practice; but whether or not it is, or has been, precisely formulated, its chief characteristic is that it is susceptible of precise formulation, although special skill and insight may be required to give it that formulation. The technique (or part of it) of driving a motor car on English roads is to be found in the Highway Code, the technique of cookery is contained in the cookery book, and the technique of discovery in natural science or in history is in their rules of research, of observation and verification.
This sort of knowledge, as Oakeshott observes, is
susceptible of formulation in rules, principles, directions, maxims — comprehensively, in propositions. It is possible to write down technical knowledge in a book
and it’s the sort of knowledge — the knowledge of procedures and business rules, as laid down in the Staff Training Manual — that lends itself very well to call-centre automation. We have various rules about for whom we will and won’t open a particular sort of account, and we have rules about the procedures that have to be followed in opening him an account. He has to provide various forms of identification, perhaps, and someone has to check them; we have to check that he isn’t an undischarged bankrupt and, while we’re prepared to open his account, we need to verify his salary before we’ll offer him an overdraft facility since our policy is to base his overdraft on a percentage of his monthly net income. These are all simply expressed in computer code — the operator just feeds in the data, the computer processes it and gives a decision.
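To make that concrete, here is the overdraft rule above as it might be coded. The particular checks and the 25% figure are my own invention for illustration, not any bank’s actual policy:

```python
def overdraft_decision(applicant: dict) -> str:
    """Technical knowledge as code: hypothetical Staff Training Manual rules
    for granting an overdraft on a new current account."""
    if not applicant["identity_verified"]:
        return "reject: identification documents not checked"
    if applicant["undischarged_bankrupt"]:
        return "reject: undischarged bankrupt"
    if not applicant["salary_verified"]:
        return "reject: salary unverified"
    # Policy rule: the limit is a fixed percentage of monthly net income
    # (25% here, purely as an example).
    limit = round(applicant["monthly_net_income"] * 0.25, 2)
    return f"approve: overdraft limit {limit}"

print(overdraft_decision({
    "identity_verified": True,
    "undischarged_bankrupt": False,
    "salary_verified": True,
    "monthly_net_income": 2000.0,
}))
```

Nothing here requires judgment; the operator feeds in the data and the rules grind out an answer, which is exactly what makes this sort of knowledge so easy to automate.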
However, before we turn to Oakeshott’s second type of knowledge, it’s worth considering some of the problems — and here I speak from experience — of trying to use the Staff Training Manual as a basis for automating business processes. Indeed, I can guarantee that if you try to do this, you will assuredly, and no matter how good the process and how good the manual, code in something that doesn’t work. This is because manuals are normally expressed in terms of what should happen. Computers are very literal minded beasts, as we well know, and if you tell the computer that something’s got to be done in a certain way, then if it ain’t done that way, the computer ain’t having it. It won’t accept deviations or substitutes.
In reality, what Staff Manuals are generally about is what must not happen. Many of the procedures are there not because that’s the only way to do something but because doing it that way avoids the results you don’t want. The people who use the manuals, because they’re intelligent human beings, realise this even if they don’t thus articulate it. They don’t follow the book slavishly, or not if they’re any good at their job. If what the manual means is, ‘such-and-such can’t happen unless you’ve checked the following things,’ then you do the checks in whatever order is best at the time, unless one check depends on another having been done first.
People also, again assuming they’re any good at their job, use their own initiative when — as will invariably happen — either the manual makes no sense at some point or a situation arises that wasn’t envisaged in the manual. Whenever you’re undertaking this sort of exercise, once you start talking to the people who use the system rather than the people who’ve written the manual, you find you’re told, ‘Well, this is what it says we should do in these circumstances, but that wouldn’t work because…, so instead we do this…’. People not infrequently make the process work despite what it says in the book. Automate the process and, unless you’ve both listened to these people and built flexibility into the system, you stop them from using their expertise and experience to make the system work and, instead, code failure into it.
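The difference between coding the manual as a fixed procedure and coding it as a set of constraints (‘what must not happen’) can be sketched like this; the check names are invented for illustration:

```python
# Hypothetical sketch: the manual's rules recast as a constraint rather than
# a fixed sequence of steps. The system insists every required check is done
# before the account opens, but doesn't dictate the order staff do them in.

REQUIRED_CHECKS = {"identity", "address", "bankruptcy_search"}

def can_open_account(completed_checks: list) -> bool:
    """True only once every required check has been recorded, in any order."""
    return REQUIRED_CHECKS <= set(completed_checks)

# An incomplete set of checks blocks the account...
print(can_open_account(["identity"]))
# ...but any order of completing them all is acceptable.
print(can_open_account(["address", "identity", "bankruptcy_search"]))
print(can_open_account(["bankruptcy_search", "identity", "address"]))
```

A system built this way leaves the staff the flexibility the paragraph above describes; one that hard-codes the manual’s step 1, step 2, step 3 does not.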
This leads us to Oakeshott’s second form of knowledge,
The second sort of knowledge I will call practical, because it exists only in use, is not reflective and (unlike technique) cannot be formulated in rules. This does not mean, however, that it is an esoteric sort of knowledge. It means only that the method by which it may be shared and becomes common knowledge is not the method of formulated doctrine. And if we consider it from this point of view, it would not, I think, be misleading to speak of it as traditional knowledge. In every activity this sort of knowledge is also involved; the mastery of any skill, the pursuit of any concrete activity is impossible without it.
It’s gained by experience; Oakeshott continues,
Technical knowledge can be learned from a book; it can be learned in a correspondence course. Moreover, much of it can be learned by heart, repeated by rote, and applied mechanically: the logic of the syllogism is a technique of this kind. Technical knowledge, in short, can be both taught and learned in the simplest meanings of these words. On the other hand, practical knowledge can neither be taught nor learned, but only imparted and acquired. It exists only in practice, and the only way to acquire it is by apprenticeship to a master–not because the master can teach it (he cannot), but because it can be acquired only by continuous contact with one who is perpetually practising it. In the arts and in natural science what normally happens is that the pupil, in being taught and in learning the technique from his master, discovers himself to have acquired also another sort of knowledge than merely technical knowledge, without it ever having been precisely imparted and often without being able to say precisely what it is. Thus a pianist acquires artistry as well as technique, a chess-player style and insight into the game as well as a knowledge of the moves, and a scientist acquires (among other things) the sort of judgment which tells him when his technique is leading him astray and the connoisseurship which enables him to distinguish the profitable from the unprofitable directions to explore.
This maybe sounds more mysterious than I think it is; it’s what we mean by experience and judgment in the job, and, in the context of my financial services example, it’s what a trained and experienced loans advisor (or whoever) brings to bear when assessing a loan application. In a well-designed financial services application, you’ll have rules that’ll automatically approve some loans — they meet all the bank’s criteria, and the computer can recognise this — and there will be some that are automatically rejected because you’ve got a hard and fast rule that this must happen (the applicant is an undischarged bankrupt, for example).
In some circumstances, the programme can prompt the operator with products that she can offer the applicant; ‘Sorry, but we can’t lend that amount as an unsecured loan, but since you’ve told me you own your own home, would you be interested in a secured loan?’ or ‘Sorry, but the most we can lend you over such and such a period, based on what you say is your income, is so much; would you like to borrow that or would you like to borrow the original amount over a longer repayment period?’ But if the programme is any good, it will have to allow a ‘Refer to Supervisor’ option whereby the supervisor can use her discretion to agree (or not) to the loan, possibly on special terms; you may well say that you won’t lend money to anyone whose score comes back from Experian with more than a certain number of black marks on it, but that there’s a grey area where you’ll consider the application.
You need, at this point, to involve an experienced advisor not because there’s something about the application humans can see that computers can’t, but because a human who’s experienced in the business, who knows the company’s policies and has seen them applied in various situations, and who can compare the present situation with similar ones in the past, has access to a whole mass of information about handling applications. In theory there’s nothing to stop you adding all this to the body of rules; in practice it won’t have been incorporated into the computer’s rule base, because no one will have bothered to sit down and work out every possible permutation of circumstances that could arise in real life and, probably, no one has thought it necessary — while there are still people about — to try to work out the algorithms to represent the decision-making process that goes on when you’re considering, ‘do I take a chance on this application?’ The decision maker probably doesn’t herself know exactly what’s swayed her one way or the other (though she probably could explain why she took the particular decision if she thought about it).
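The division of labour described above, with hard rules at either end and a grey area referred to a human, might be sketched like this; the score thresholds are entirely made up:

```python
# Hypothetical three-way loan decision: hard rules approve or reject
# outright, and a grey band of credit scores is referred to a supervisor,
# whose practical knowledge the rule base can't capture. The thresholds
# (700 and 550) are invented for illustration.

def loan_decision(credit_score: int, undischarged_bankrupt: bool) -> str:
    if undischarged_bankrupt:
        return "REJECT"       # hard and fast rule: no discretion
    if credit_score >= 700:
        return "APPROVE"      # meets all criteria; no human needed
    if credit_score >= 550:
        return "REFER"        # grey area: supervisor's discretion
    return "REJECT"           # too many black marks

print(loan_decision(720, False))
print(loan_decision(600, False))
print(loan_decision(500, False))
```

The `REFER` branch is where Oakeshott’s practical knowledge enters the system: the computer knows only that it doesn’t know.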
Easier by far to identify someone in the organisation who’s got a good track record at this sort of thing and have them train up others to handle these applications.
The problem is, though, when you’re automating processes, that this kind of practical knowledge isn’t the kind, for the reasons I’ve just explained, that can easily be coded. And, in Oakeshott’s words,
Rationalism is the assertion that what I have called practical knowledge is not knowledge at all, the assertion that, properly speaking, there is no knowledge which is not technical knowledge. The Rationalist holds that the only element of knowledge involved in any human activity is technical knowledge, and that what I have called practical knowledge is really only a sort of nescience which would be negligible if it were not positively mischievous. The sovereignty of ‘reason’, for the Rationalist, means the sovereignty of technique.
People involved in building successful business process software know that practical knowledge is very important; it’s just that it’s very difficult to code and, unless you work very closely with the people whose work you’re trying to automate, it’s very difficult to know where practical knowledge comes into play in the process.
Certainly, though, the worst possible thing you can do — and I offer you this for free — is follow the practice of many government departments and just hand a skilled team of outside experts several ring-binders containing what you take to be the full spec for whatever it is your department does, with the instruction to come back in a year’s time when they’ve written the programmes. Inevitably, their understanding of what you asked them to do won’t quite be yours and, equally inevitably, your business process doesn’t actually work the way you thought it does, with the inevitable result that by faithfully following your instructions, the team have ended up building bad practice into the application.
Quite apart from this is the problem that people tend to see ‘new computer systems’ as a magic solution to problems. Obviously, they’re not. Properly designed and implemented, they’re very good tools but you need to be reasonably sure in the first place what you want them to do before you start to build them. I say ‘reasonably sure’ because it’ll not infrequently become clear during the development process — if, that is, the designers are doing their job properly and working with your staff — that the problem you want solving has a somewhat different cause from the one you thought you’d identified.
It’s all very well to say, ‘We need more and better information and the computer can provide it’. Yes, it can, but most often the problem you’re trying to address is neither the quantity nor the quality of your data but what you actually do with it once you’ve got it. I’ve alluded in previous posts to the Soham school murders and the murder of Victoria Climbié; neither of these, as the official reports on the events leading up to them made clear, was the result of insufficient information. They were the result of people not acting on the information they had or not collecting the information they were supposed to. Certainly the Soham murders could have been prevented — if we’re looking for IT fixes — by having a system in place in the Personnel Department that wouldn’t let you hire people without carrying out the checks you were supposed to, and which helpfully printed out (or emailed to someone) all the relevant information from the job application form you were supposed to be checking. And, at least from Sir Michael Bichard’s report, it would have been a great help had someone explained properly to the officers using Humberside’s IT system how they were supposed to use it, so data was correctly recorded and coded in the first place.
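A system of that sort, one that simply refuses to complete a hire until the mandatory checks are on file, is only a few lines of code; the check names below are my own invention, not the actual vetting requirements:

```python
# Hypothetical sketch of a Personnel system that won't let a hire go
# through until every mandatory check has been recorded against it.

class IncompleteVetting(Exception):
    """Raised when someone tries to confirm a hire with checks outstanding."""

MANDATORY_CHECKS = {"references", "criminal_record", "employment_history"}

def confirm_hire(applicant: str, checks_recorded: set) -> str:
    missing = MANDATORY_CHECKS - set(checks_recorded)
    if missing:
        raise IncompleteVetting(
            f"{applicant}: checks outstanding: {sorted(missing)}")
    return f"{applicant}: hire confirmed"

try:
    confirm_hire("applicant A", {"references"})
except IncompleteVetting as e:
    print(e)

print(confirm_hire("applicant B", MANDATORY_CHECKS))
```

The value of such a system is entirely negative, in the sense discussed earlier: it doesn’t make anyone a better judge of applicants, it just makes a known bad outcome impossible to reach by accident.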
However, for governments, the magic new computer is a powerful temptation. Something goes badly wrong and there are demands that ‘something must be done’ to ensure it doesn’t go wrong in future. Announcing a new IT system is a very convenient way of kicking the problem into touch for a year or eighteen months — you’re doing something about the problem, so let’s move on (as the Dear Leader would say) and stop worrying about it. With any luck, by the time it’s delivered, the Minister will be in a new job and it’ll be someone else’s problem; all things being equal, things won’t go catastrophically wrong again for a few years after the system’s been delivered, anyway, so when they do your successor can just announce the old one was very good but now he clearly needs a newer and better system.
Meanwhile, the new computer system will almost assuredly be undergoing some sort of mission creep. Someone is bound to say, while you’re developing it, ‘wouldn’t it be useful if, as well as doing this, it can also do that…?’ Well, yes; quite possibly it would be useful if it did that, too, but we didn’t quote for it to do whatever that is, and we’re certainly not going to be able to deliver it on time if we try to build that onto the original spec. The temptation is, of course, to agree to whatever’s proposed, particularly if it’s a junior minister who’s had the bright idea and if you’re being offered scads of extra money to add that onto the project, but it ends in tears by bed-time.
There was one quite recent example of a London local authority’s housing department being reduced to an even more complete melt-down than it was already in by the installation of what should have been a virtual clone of a system for processing council housing rent payments, applications, council tax payments, housing benefit and council tax benefit and requests for repairs that was already in place in some other London boroughs and provincial cities. The major problem for the project was that the particular borough’s records were in such a shambles to start with that it was proving very difficult to put clean data on the system before it went live. That was just about under control when the borough, in a fit of corporate insanity, agreed to let their new system be adapted to try out some government measures to tackle housing benefit fraud; that had just been identified as a problem by some report, the minister had to take action, and here was a handy way of doing it — ‘We’re testing a new computerised system to prevent it’.
At that point, the developers should have pulled the plug, saying that they hadn’t quoted on this basis and that, while they’d be happy to build the required anti-fraud system to run on the back of the system on which they were working, they didn’t want to know until they’d finished the current project. Unfortunately for all concerned, they didn’t, and ended up delivering a complete monstrosity that utterly buggered the council’s whole housing benefit system for several months and stuck millions of pounds of rent into suspense accounts while people tried to work out whose rent it was.
The problems were many, but essentially they’d taken a completely untried anti-fraud process which, as they rapidly discovered, was unworkable in practice and hard-coded it into a system with which they were having difficulties in the first place. This they were having difficulties with not because there was anything necessarily wrong with the software or the processes it modelled but because they were trying to implement it in an environment where things had gone badly wrong in the first place; the software, as I say, was pretty much a clone of something that was already up and running in several other places. However, it was supposed to stop you from getting into a mess, not to get you out of one that you’d created for yourself.
tags: UK, Government, IT systems, managerialism, Michael Oakeshott