I use the examples of medical devices, clinical trials and health data to examine the framing of harm through the language of technological risk and failure. Across the examples, there is little or no suggestion of failure on the part of those formally responsible. Instead, failure is seen as arising when harm is refracted through calculative techniques and judgements and reaches the point at which the expectations of safety built into technological framings of regulation are thwarted. Technological framings may marginalise the contribution that patients, research participants and others can make to regulation, and this marginalisation may in turn underlie harm and lead to the construction of failure. Such marginalisation may amount to epistemic injustice. Epistemic injustice and its link to failure, which carries normative weight over and above harm, can pose a risk to organisational standing and reputation. This risk can be harnessed to broaden the knowledge base to include stakeholder knowledges of harm, and to widen responsibilities and accountabilities. In this way, regulation promises to better anticipate and prevent harm and failure, and to improve the efficacy and legitimacy of the health research enterprise.