Bridge displacement, particularly under specific loading events, is increasingly viewed as a data signal containing information useful for bridge management. The absence of a convenient fixed reference point has historically made displacement measurement challenging in field conditions, but recent developments in image processing have enabled low-cost, camera-based displacement measurement. Even so, logistical challenges remain, including organising access to position the camera, stabilising the camera in windy conditions and managing the large data files that result. Consequently, since accelerometers are widely used, research has continued to investigate the recovery of displacement through double integration of acceleration signals. The main challenge with this approach is the error introduced into the calculated displacement signal by low-frequency noise. Much of the research to overcome this challenge has involved filters, but the results can be highly sensitive to parameter selection and it can be difficult to define a confidence level for the calculated displacement. In this paper, an approach is proposed that does not require filters. Instead, it focuses on the quality of the hardware used, to minimise low-frequency noise in the acceleration signals, and then applies a set of quality checks for different bridge and loading scenarios. The sensitivity of the proposed procedure to the choice of start and end points in the acceleration time series is demonstrated, and an approach to overcome this sensitivity is developed. The robustness of the approach was trialled on three different bridges, subject to three different loading scenarios: a moving truck, a moving train, and pseudo-static loading with a truck stopping on the bridge.
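To make the core difficulty concrete, the following is a minimal sketch (not the paper's implementation) of displacement recovery by double trapezoidal integration of a synthetic acceleration record; the signal parameters, the constant bias used to stand in for low-frequency noise, and all variable names are illustrative assumptions. It shows how even a tiny acceleration bias grows quadratically after double integration and can dwarf the true displacement over a long record.

```python
import numpy as np

def double_integrate(acc, dt):
    """Recover displacement by two passes of cumulative trapezoidal
    integration of an acceleration series (zero initial conditions)."""
    vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) / 2.0) * dt))
    disp = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) / 2.0) * dt))
    return disp

# Synthetic example: 1 Hz sinusoidal bridge motion, 200 Hz sampling, 10 s record.
fs, dur, f, A = 200.0, 10.0, 1.0, 0.005     # assumed values (A = 5 mm half-amplitude)
t = np.arange(0.0, dur, 1.0 / fs)
w = 2.0 * np.pi * f
disp_true = A * (1.0 - np.cos(w * t))        # starts at rest: disp(0) = vel(0) = 0
acc_true = A * w**2 * np.cos(w * t)          # exact second derivative of disp_true

# Noise-free integration tracks the true displacement closely.
disp_clean = double_integrate(acc_true, 1.0 / fs)

# A constant bias of 0.001 m/s^2 (a stand-in for low-frequency noise)
# integrates to a quadratic drift of bias * t^2 / 2, i.e. ~0.05 m after
# 10 s, an order of magnitude larger than the 5 mm true motion.
bias = 1e-3
disp_biased = double_integrate(acc_true + bias, 1.0 / fs)
drift = bias * dur**2 / 2.0
```

This drift growth is why the abstract notes that long-duration records are the hardest case, and why sensor-level noise quality matters when filters are not used.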
Where the calculated displacements passed the quality checks, they were found to be in good agreement with directly measured displacements (recorded with an Imetrum displacement measurement system). Unsurprisingly, long-duration, pseudo-static loading is the most challenging case owing to the accumulation of low-frequency noise effects; here the quality checks correctly identified that the calculated signal would be unreliable.
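As an illustration of the kind of check that can flag an unreliable integrated signal, the sketch below implements one plausible test: after a transient crossing event the bridge should return to rest, so a calculated displacement ending far from zero indicates integration drift. This particular criterion, its tolerance, and the synthetic signals are hypothetical examples, not the specific checks developed in the paper.

```python
import numpy as np

def passes_return_to_zero(disp, tol=1e-3):
    """Hypothetical quality check (not necessarily the paper's): a transient
    loading event should leave the bridge back at its rest position, so a
    final calculated displacement larger than `tol` (metres) suggests that
    low-frequency noise has corrupted the double integration."""
    return bool(abs(disp[-1]) < tol)

# Illustrative signals: a decaying oscillation that settles back to rest,
# and the same signal corrupted by a quadratic drift from a sensor bias.
t = np.linspace(0.0, 10.0, 2001)
clean = 0.005 * np.sin(2.0 * np.pi * t) * np.exp(-0.5 * t)
drifting = clean + 0.001 * t**2
```

A real deployment would combine several such criteria, chosen per bridge and loading scenario, as the abstract describes.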