No matter how hard we work to avoid it, medical devices are dangerous. Some administer controlled harm as their primary effect. Many carry a burden of responsibility for patient health in complex circumstances where precise scenarios are impossible to exhaustively predict. Anatomy is variable. Adjacent devices and human systems are difficult to fully account for. People make errors of judgement.
And, above all and as has been so well documented, the proverbial just happens.
Faced with this memento mori, it’d be understandable if we conscientious, caring and over-analytical engineers just froze, unable to reconcile our goal of helping with the danger of harming. To help us escape analysis paralysis, we have regulation – an attempt to distil complex and unspoken societal philosophy into simple, clear practice. Regulation is great. I mean that sincerely.
Rules rule
A nice, clear standard carrying presumption of compliance gives engineers the assurance that they are operating to the ideals expected of them by society. Stick to the rules and you know that your fellow man considers you to be appropriately balancing the risk inherent in complex engineering with the benefits to be realised from it, and you can sleep easy at night knowing that you’re doing something useful without accidentally becoming a monster*. We rejoice in the assumption that, should medical devices drift into uncomfortable levels of harm (or should society’s tolerance of harm shift), regulation will eventually tighten until products cease to so disappoint. A nice, reliable (if laggy and hysteretic) feedback system – a Linus blanket an engineer can draw on a Post-it, understand and trust in, provided you’re prepared to overlook all those first-year lectures about instability and poles in the wrong half of the whatsit, or whatever all that guff was about. It was pretty dull, so it probably wasn’t important.
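For anyone who did stay awake in those lectures, here’s a toy sketch of the point, in Python and with every number invented – an illustration of delay-induced instability, not a model of any real regulatory process. A proportional corrector behaves itself when acting on fresh data, and over-corrects into oscillation when acting on stale data.

```python
# Toy sketch: proportional "regulation" of a normalised harm level
# (target 0), acting on data that is delay_steps periods old.
# All numbers are invented for illustration.

def regulate(delay_steps, gain=0.8, steps=40):
    harm = [1.0] * (delay_steps + 1)
    for _ in range(steps):
        observed = harm[-1 - delay_steps]  # the regulator sees stale data
        harm.append(harm[-1] - gain * observed)
    return harm

for d in (0, 3):
    tail = [round(h, 2) for h in regulate(d)[-5:]]
    print(f"observation delay {d}: {tail}")

# With no delay, the harm level settles to zero. With a delay of three
# periods, the same gain over-corrects and the loop oscillates with
# growing amplitude: the poles have wandered somewhere they shouldn't
# (outside the unit circle, for the pedants).
```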
The trouble is, it’s all statistics. Regulation to balance harm and benefit only works when considered at the level of large numbers of cases. Statistics are of little comfort to individuals, or to those who must empathise with them.
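To put rough numbers on that tension (both figures below are invented for illustration, not drawn from any real device):

```python
# Back-of-envelope sketch: a residual risk that looks acceptable at
# population scale still implies real, individual harm.

p_serious_harm = 1e-5            # assumed residual risk per procedure
procedures_per_year = 2_000_000  # assumed annual uses of the device

expected_harms = p_serious_harm * procedures_per_year
print(f"Expected serious harms per year: {expected_harms:.0f}")  # ~20
```

Twenty cases in two million is a triumph of population-level risk management, right up until you have to read one of the twenty reports.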
Warning: Peril ahead
And so to today’s thesis: I speculate that this difficulty of detachment is a source of jeopardy for those working in the industry, especially those of us who live somewhat at arm’s length from the clinical field.
For the sake of our mental health, it’s worth digging into this a little more.
Some rules rule harder than others
Of course, this difficulty with the statistical perspective applies in other safety-critical fields, too. Aerospace, nuclear and automotive are often cited and, as I’ve suggested elsewhere, the systems view which is so bound up with risk management is more mature in those industries. I’ve heard it said repeatedly by career risk managers in those austere sectors that we in medical devices sit decidedly at the cowboy end of things. Some of our practices, even as defined by regulation, do not satisfy their outlook, and I think that’s understandable – aside from the fact that their industries deal with risk to innocent, healthy bystanders rather than the already unwell, the economics, expertise of users, immediacy of mitigation and so forth all vary.
No nascent company ever went from blank page to commissioned nuclear facility inside four years. As much as you may point at Tesla as a start-up producing a car in double-quick time, much of the experience of integrating vehicle systems was there to be inherited from existing, long-running development businesses, and that company in particular could well afford to carve steps up its learning curve by hurling sharpened gold bricks at it. I doubt Ford start designing each new SUV by discussing how many wheels it should have all over again. Development of new products in these industries tends towards a pipeline of staggered yet parallel projects with evolving methods of creation and incremental adoption of new technologies. The shoulders of giants positively carpet the ground – you can’t avoid standing on them.
High-tech medical products built by tiny companies can’t be like that. Too many of us have only just left university. Too many of us have never done it before. Organisations are too small and dispersed to enable effective flow of knowledge from experienced heads down to imaginative ones. Economics kills companies that don’t get to market quickly. Whole careers spent in medical are unusual (whereas you can sit at a desk at BAE Systems from cradle to grave). Without a different approach to risk management (among other things), the only people producing new developments would be the big players – the Medtronics of this world. And once you have a monopoly, why would you bother to innovate or improve? If you hunger for advances, you have to tolerate adaptation to these pressures.
There’ll always be a worrier
Personally, coming to medical devices via automotive and aerospace, I experience occasional discomfort with aspects of the way we do risk management as an industry. This shouldn’t be cause for alarm and isn’t to say I feel the need to blow a whistle – everyone’s attitude to risk sits on a continuum (hence the delightful complexities of choosing a pension fund), and I happen to sit to the conservative side of the mean – somebody’s got to, and it’s likely somebody on your team does even if it isn’t you. If everyone did risk management the way I’d like, we’d all spend our lives worrying and, again, only Medtronic would ever get anything out of the door, and that rather later than the perhaps ironically named “patients” would like. Bless us all and our whole “wanting to be well again” thing.
Worriers gonna worry
So, no matter how well you follow the rules, somebody you know is always going to be uncomfortable in hindsight in the face of adverse events, and current industry practice is not actually expected to rule those out. Serious incidents do occur despite (mostly) diligent risk work. Planes fall out of the sky; nuclear reactors get hit by tsunamis. People die on operating tables. Up close, such events seem horrific, unacceptable, daunting in the scale of their tragedy; in the grand scheme of things, we are reminded that air travel is still safer than driving, that public demand for nuclear power to be banned isn’t constantly headline news, and so on.
Trouble is, you will justify your medical product’s residual risks on a statistical basis but will be faced with any adverse outcome up close and very personal – and, importantly, against the backdrop of being assured you’re a cowboy by the big boys and possibly the public and the press. I will never forget hearing an anecdote from, I think, David Mintz of Auris Health, about being invited as a young engineer to go to the operating table and tell his imminent first-in-man patient that “I designed this thing, and I promise it’s going to be ok”. It takes real insight into your development process to say that confidently. Statistics weaken in front of the individual face looking up at you. If you’re about to cut me open, please don’t say so in your nervous voice.
Death and taxes
For those developing really complex devices, I’ve got bad news: circumstances are decidedly against you. You can never expect to cover all eventualities in the risk management of a necessarily complex design in a necessarily complex context. Delve as far as you like – the tree of possibilities is fractal, and niches will always exist. For your own sake, you should assume that your device will hurt somebody one day. It may be through clear fault, or through tangential involvement in a muddy set of circumstances, but the question will remain – was I responsible?
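To see why exhaustive coverage is a lost cause, here’s a crude counting sketch in Python. The factor counts are invented, and real factors are neither binary nor independent, so the true space is far worse.

```python
# Crude model: treat the context of use as a set of independent
# binary factors and count the distinct scenarios they generate.
# The factor counts below are invented for illustration.

factors = {
    "anatomical variants": 8,
    "adjacent devices": 5,
    "user error modes": 10,
    "environmental conditions": 7,
}

scenarios = 1
for name, count in factors.items():
    scenarios *= 2 ** count  # each binary factor doubles the space

print(f"Binary factors considered: {sum(factors.values())}")
print(f"Distinct scenarios: {scenarios:,}")  # 2**30, over a billion
```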
The adult thing will be to step back and remember all the good the device is doing, methodically update your risk management and, in all likelihood, accept that the event is consistent with your justification and nothing has changed. You make your report to the MAUDE database and carry on the good work. But surely the human reaction cannot be so divorced – your senior people will have to come to terms with the fact that their lucrative livelihood is based on something that killed somebody’s mother while they were on the golf course this weekend. Your junior staff will be brought to a sudden realisation that it’s not just code, beanbags and pizza in this industry. Your exciting, cutting-edge dream job morphs into a world of introspection and guilt as you scan the email.
So, is your team prepared? How will you support each other, including the worriers, when this eventuality occurs? How will you react?
The mental shield
I’m lucky enough not to know the answer to this. I’ve never been on the business end of such a report, but I continue to assume I will be one day – something I’ve been involved with will be the story of the hour, and as a risk management conservative in a fast-paced, economically challenging industry, I’m well aware I’m going to have a hard time with it. But at least I can have my justification set out in my mind in advance, as far as I am comfortable with it. I believe this is very important – you must not wait to react. Prophylaxis is vital: as with death, the soul must be prepared in advance, because it could come at any moment. You may even have left the company by the time it hits – you should have been handed your shield before you walked out the door, because you aren’t going to be able to go back for one when the need arises.
The only basis I can see for self-insulation, aside from honest, unquestioning faith in a higher power or swivel-eyed, self-interested mania, is maintaining a full understanding of risk management and the logic behind it. You need to be comfortable with it and confident in it. You need to be able to believe in it in spite of the detailed clinician’s report or the pictures in the popular press. Risk management is traditionally seen to protect the wellbeing of patients and the legal standing of device producers, but I’d argue it protects the personal wellbeing of device producers, too.
Importantly, I don’t think this applies only to senior people or those deeply involved in risk work.
Yes, ISO 14971 only says you have to train those responsible for risk management activities, but while it may simplify things to lock your graduates away to write code, insulated from risk management by others dipping in to take responsibility for it, that’s unhealthy for two reasons. First, if you’re a medical devices engineer but consideration of risk is not built into your every move, you’re not a very good medical devices engineer. Second, an incomplete understanding of the foundations of the risk process means you have no psychological protection. Without context, that death must have been your fault because you wrote the code that misbehaved or designed the component that snapped or missed the usability snag. With context, you can see that you’re part of a wider team working within a wider safety architecture and the accident occurred in spite of following best coding practice, the best possible risk analyses, etc.
So I put it to you that, if only from a pastoral perspective, you owe your whole team a proper education in medical risk management, even those not clearly responsible for it. A deep understanding of your processes and the underlying principles provides important personal protection against the eventualities.
Your patients’ bodies are fragile, yes. But consider that some of your staff’s minds may prove so, too.
* It’s worth noting that there are strong arguments that, at any given time, regulation doesn’t actually match society’s expectations properly. For example, I think Sling the Mesh will argue convincingly that the predicates system is not fit for purpose. We have to do the best we can, and the more people who are fully educated in regulation, the better our chance of picking up problems and getting them corrected.