Why identical assets fail differently – and how to fix it

Estimated reading time: 4 minutes

Picture two lines on your factory floor: on line A, the transfer pump runs for 18 months between failures, but on line B, an identical pump runs for just six months. The pumps come from the same vendor, are the same model, and have the same maintenance schedule.

So, what’s different?

You may have theories – for example, line B gets worked harder; they have different operating patterns; or perhaps you received a dodgy batch of parts. But when you pull the utilisation data, it’s near identical.

So, what’s actually different?

The answer isn’t in the pump. It’s in everything around it that the system doesn’t capture – the operating context. You already know something’s different between line A and line B. Once those variables are tracked, you can prove it – and stop replacing the same components every six months.

The context gap

Condition monitoring tells you the pump is degrading, vibration is climbing, and temperature is drifting. But why is it degrading faster on line B?

That’s the context gap. It’s not maintenance’s fault – it’s a data capture problem baked into how most sites run; a series of hidden variables that change failure behaviour but rarely get logged.

Here’s what that looked like on line B.

Line B ran three washdowns per shift instead of one, was cleaned with different chemicals, and experienced more voltage instability during peak production hours. Ambient temperature was higher near the packing line, and start-stop cycles doubled during a product changeover that lasted eight weeks.

However, none of these details made it into the CMMS – not because you didn’t think they mattered, but because there was nowhere to log them.

Once you see these variables tracked, it becomes clear: the pump on line B didn’t fail randomly. It failed predictably, given the conditions it was working in – but those conditions were invisible.

What’s actually different

These gaps show up everywhere. Environment, lubrication, power, process – hidden variables that change how assets behave.

For example, temperature swings, humidity, dust, and washdown frequency affect lubrication life, accelerate corrosion, and enable contamination ingress. Bearing failures look random until you track grease type, contamination exposure after washdowns, and seal failures that follow inferior spares batches. Then the pattern appears.

But these factors rarely make it into the failure record.

Or think about process changes. You set your vibration baseline during steady production of product A, then the line switches to product B – different viscosity, different speeds, more start-stops.

Without the process context – what’s actually running, at what duty cycle, with what load – you’re making decisions in the dark. The sensor data looks abnormal, but “normal” just changed.

The pump didn’t change between line A and line B. The context did.
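The idea of “normal” shifting with the process can be made concrete. Here is a minimal sketch of keeping a separate vibration baseline per product state, so a reading is judged against what’s actually running. The product names, baseline values, and margin are illustrative assumptions, not figures from any real line.

```python
# Sketch: one vibration baseline per product/duty state, so "abnormal"
# is judged against the context actually in production.
# Baselines, margin, and product names are hypothetical.
BASELINES_MM_S = {"product_A": 2.1, "product_B": 3.4}  # RMS vibration baselines

def is_abnormal(reading_mm_s: float, running_product: str, margin: float = 1.5) -> bool:
    """Flag a reading only if it exceeds the baseline for the product being run."""
    return reading_mm_s > BASELINES_MM_S[running_product] * margin

# The same 4.0 mm/s reading trips the product A baseline (2.1 * 1.5 = 3.15)
# but sits comfortably inside product B's range (3.4 * 1.5 = 5.1).
print(is_abnormal(4.0, "product_A"))  # True
print(is_abnormal(4.0, "product_B"))  # False
```

The point isn’t the arithmetic – it’s that the comparison only makes sense once duty state is logged next to the sensor reading.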

Start small

Interpretation improves massively when you log duty and environment alongside the condition data you already have. You don’t necessarily need more hardware. You need to capture what’s happening around the sensors you’ve already installed.

But you don’t need to boil the ocean. Small, incremental changes that solve real problems can be scaled for impact.

Pick one asset class driving repeat failures – for example, pumps, fans, gearboxes, or compressors. Choose two context variables that might explain the spread in failure rates and log them for six weeks. It doesn’t need to be complex – it’s about creating structure and discipline.
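To show how little is needed to get started, here is a sketch of a six-week context log as a structured CSV appended once per shift. The file name, field names, and example values are hypothetical – the only point is that each record pairs an asset with the two context variables you chose.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("context_log.csv")  # hypothetical file name
FIELDS = ["timestamp", "asset_id", "washdowns_per_shift", "ambient_temp_c"]

def log_context(asset_id: str, washdowns_per_shift: int, ambient_temp_c: float) -> None:
    """Append one context record per shift for the chosen asset class."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "asset_id": asset_id,
            "washdowns_per_shift": washdowns_per_shift,
            "ambient_temp_c": ambient_temp_c,
        })

# Example: one record per shift for the line B transfer pump
log_context("PUMP-B-01", washdowns_per_shift=3, ambient_temp_c=31.5)
```

A shared spreadsheet works just as well – the discipline of logging per shift matters more than the tooling.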

Then build a signal map – a practical way to link failure modes, condition signals, and the hidden variables that distort what you’re seeing.

What this looks like

A signal map connects three things: the failure mode, the condition signal, and the context that explains it.

For example, on the line B pump, the failure mode is cavitation, the signal is high-frequency vibration spikes, and the context needed is suction pressure, duty state, and valve positions.
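A signal map can live in a spreadsheet, but as an illustration it can also be expressed as a simple lookup. The entries below mirror the line B pump example from this article; the bearing-wear entry and all naming are illustrative assumptions.

```python
# A signal map as a lookup: failure mode -> condition signal -> context needed.
# The cavitation entry mirrors the line B example; other names are illustrative.
SIGNAL_MAP = {
    "cavitation": {
        "signal": "high-frequency vibration spikes",
        "context": ["suction pressure", "duty state", "valve positions"],
    },
    "bearing wear": {
        "signal": "rising broadband vibration",
        "context": ["grease type", "washdown frequency", "spares batch"],
    },
}

def context_for(failure_mode: str) -> list[str]:
    """Return the context variables to log when a given failure mode is suspected."""
    return SIGNAL_MAP.get(failure_mode, {}).get("context", [])

print(context_for("cavitation"))
# ['suction pressure', 'duty state', 'valve positions']
```

The value is in the third column: it tells you exactly which hidden variables to start logging next to the condition data you already collect.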

On line B, suction pressure was drifting during washdowns – but nobody was tracking it. Once you log that context alongside the vibration data, the pattern becomes obvious. Line B’s repeat failures weren’t random – they were a logical outcome of undocumented changes.

AssetMinder’s platform is built around this principle – condition monitoring sensors combined with analytics that connect sensor data to operational context, supporting first-time fixes instead of repeating the same interventions.

What changes

Once you start adopting this approach, you stop replacing parts that weren’t actually the problem. The root cause becomes visible and repeat interventions drop.

Reliability becomes predictable rather than reactive, and operations sees fewer recurring stoppages and less quality drift.

And there’s a sustainability angle: machines running outside optimal parameters burn excess energy and often generate scrap. Fix the root cause and both shrink.

We’ve proven this approach across 1,200+ connected assets in manufacturing, food and beverage, aggregates, and utilities.

What recurring failure patterns are you dealing with on your lines? Reply and let us know – we’d be interested to hear what you’re seeing.
