Darrin Wikoff shares the second post in the series on criticality.
WHAT CAN BE LEARNED FROM THE NUMBER?
This is the point where most asset management processes go wrong. Many models in use today set a criticality ranking based solely on the scoring range. For example, an asset that scores between 75 and 100 may be considered “critical”, while an asset that scores less than 25 may be “expendable”. This practice undermines the entire concept of criticality analysis. The organization might as well give each asset a number from 1 to 5 and call them all equal. Grouping scores this way provides no meaningful data for establishing or revising asset management plans, nor does it distinguish among “critical” assets to show which are regulatory-controlled, mission-critical, or simply unreliable.
We need to recognize that not all assets are created equal. We also need to remember that the model we are trying to implement is an “analysis”, which by definition means scrutinizing and examining the data collected to gain the knowledge needed to make intelligent, data-driven decisions. The results of our analysis should not only identify the assets that fall within the top 20%, but should also indicate the leading characteristic that makes each asset critical.
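As a rough illustration only, the small sketch below shows one way this might look in practice. The asset names other than the cooling water pump, the category names, and all of the individual scores are hypothetical, not taken from Table 1; the point is simply that keeping the per-category scores alongside the total lets the analysis report both the top-20% ranking and the leading characteristic driving each score.

```python
# Hypothetical sketch: rank assets by total criticality score, but keep the
# per-category risk scores so the leading characteristic can be reported too.
# Category names and numbers are illustrative, not taken from Table 1.

assets = {
    "No. 12 Cooling Water Pump": {
        "regulatory": 10, "mission impact": 25,
        "single-point-failure consequence": 30, "reliability history": 15,
    },
    "Plant Air Compressor B": {
        "regulatory": 5, "mission impact": 15,
        "single-point-failure consequence": 10, "reliability history": 10,
    },
    "Packaging Line Conveyor": {
        "regulatory": 0, "mission impact": 10,
        "single-point-failure consequence": 5, "reliability history": 20,
    },
}

# Rank assets by total score, highest first.
ranked = sorted(assets.items(),
                key=lambda item: sum(item[1].values()),
                reverse=True)

# Flag roughly the top 20% of the ranked list as "critical".
cutoff = max(1, round(len(ranked) * 0.20))

for rank, (name, categories) in enumerate(ranked, start=1):
    total = sum(categories.values())
    leading = max(categories, key=categories.get)  # dominant risk attribute
    tag = "CRITICAL" if rank <= cutoff else ""
    print(f"{name}: total={total}, leading characteristic='{leading}' {tag}")
```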
Using the Table 1 example, we might conclude that the “No. 12 Cooling Water Pump” is a critical asset because it falls within the top 20% guideline, but the score of ‘80’ alone tells us nothing about how to manage this “critical” asset. Because we categorized the risk attributes, we can quickly see that reducing the consequences of a single point of failure, whether through Single Minute Exchange of Die (SMED), ready-service spares, or a properly managed critical spares inventory, lowers the criticality ranking and allows Maintenance and Operations to focus their efforts on the truly unreliable, unpredictable assets.
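To make that concrete, here is a small hypothetical before/after comparison. The category breakdown and the mitigated score are illustrative assumptions, not figures from Table 1; only the total of 80 and the 75-point “critical” band come from the examples above.

```python
# Hypothetical before/after comparison: same asset, same scoring model, but the
# single-point-failure consequence is mitigated (e.g., via SMED, ready-service
# spares, or managed critical spares). Numbers are illustrative, not Table 1 data.

before = {
    "regulatory": 10, "mission impact": 25,
    "single-point-failure consequence": 30, "reliability history": 15,
}

# After mitigation, only the single-point-failure category score is reduced.
after = dict(before, **{"single-point-failure consequence": 10})

critical_threshold = 75  # assumed cutoff, per the 75-100 "critical" band above

for label, scores in (("before", before), ("after", after)):
    total = sum(scores.values())
    status = "critical" if total >= critical_threshold else "not critical"
    print(f"No. 12 Cooling Water Pump ({label}): total={total} -> {status}")
```

Nothing about the asset itself has changed; only the consequence of failure has been managed down, which is exactly the kind of insight a single banded score cannot provide.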
The last post, next week, will discuss “managing assets by criticality.”