Thursday, September 26, 2013

Critical Factors in Determining the Criticality of Your Asset Base

Thanks to a conversation I had yesterday with a client here in the United Kingdom, today I thought I would pontificate on the science of criticality. Here are three quick points to think about when you are setting up your equipment criticality process.
First, ABC or 1-2-3 is not enough.
The idea behind criticality is that you are listing your assets in order of importance to your business goals. The more granularity you provide, the easier it becomes to use the list for things like:
Planning, sequencing, and scheduling (order of job plan development and level of plan detail etc.)
Materials stocking and spares (critical equipment may have critical parts or spares)
Equipment maintenance plan development (level of detail and techniques applied)
If all of your assets are in three or five "buckets" of criticality, you can run into issues. For example, let's say you have 1,000 assets and five levels of criticality, but in the minds of your facility staff you have a process-driven plant with assets in series, so almost nothing is unimportant. Because of this, your team scores very few assets at level 1 (the lowest criticality). Now you have, say, 970 assets in four levels of criticality, most of them on the upper end, but you reserved the highest level for safety or environmental assets. Now you have 920 assets spread across three levels. Even if they are spread evenly, you will have roughly 30 percent of your assets in each category. That is just not good enough to facilitate good business decisions. So what can you do? Use at least a 100-point criticality scale, and better yet a 1,000-point scale. The idea is to spread the assets apart across the range so that when you use criticality in decision making it does not give you buckets of assets but instead just a few assets at each level.
The second thought is that your criticality criteria must disperse the assets across the range. You should have at least 10 criteria. They should include things like:
Spare parts availability
Historical reliability
Importance to the process
Safety, health, and environmental effect
And others.
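To make the scoring idea above concrete, here is a minimal sketch of a weighted, multi-criteria criticality score on a 1,000-point scale. The criteria names, weights, and the fifth "maintenance cost impact" criterion are my illustrative assumptions, not a standard; a real process would use the site's own criteria and weightings.

```python
# Minimal sketch of a weighted, multi-criteria criticality score on a
# 1,000-point scale. Criteria and weights are illustrative assumptions.

CRITERIA_WEIGHTS = {
    "safety_health_environment": 0.30,
    "importance_to_process": 0.25,
    "historical_reliability": 0.20,
    "spare_parts_availability": 0.15,
    "maintenance_cost_impact": 0.10,  # hypothetical fifth criterion
}

def criticality_score(ratings):
    """ratings: dict mapping each criterion to a 0-10 rating.
    Returns a score from 0 to 1000."""
    total = sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)
    return round(total * 100)  # weighted 0-10 average scaled to 0-1000

# Example asset: a pump rated against each criterion
pump = {
    "safety_health_environment": 9,
    "importance_to_process": 8,
    "historical_reliability": 4,
    "spare_parts_availability": 6,
    "maintenance_cost_impact": 5,
}
print(criticality_score(pump))  # 690
```

Because each asset lands on a fine-grained scale rather than in one of five buckets, sorting by score produces the ranked list the paragraph above describes.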
Third, and this was an interesting thought provided by the client, who was studying their spare parts supply chain: you should take spare part availability to the next level and think about supply chain risk. As many countries shut down and offshore the manufacture of spare parts and equipment, your risk can go up. For example, ten years ago here in England, if you needed a spare part for a mill, there might be three suppliers in country that could manufacture and provide that part within a few weeks at most. Now it is only made and stocked in India or Japan, and the local manufacturers are gone. Think about the earthquakes and tsunamis that have affected Japan and the wars that have affected other supplier regions. This puts the facility at risk and raises the criticality of that asset, because if it breaks and the only spares are in an unstable part of the world, it could be months or even years before that part is available. Because of this risk, we may want a factor that raises criticality to the point that critical spares are kept onsite for this machine.
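One way to express that supply chain factor is as a multiplier applied to an asset's base criticality score. This is a sketch under assumptions: the thresholds, factor values, and function names are hypothetical, chosen only to show the mechanics of raising criticality for risky supply chains.

```python
# Illustrative sketch: raise an asset's criticality score when its spare
# parts carry supply chain risk. Thresholds and factors are assumptions.

def supply_chain_risk_factor(suppliers, lead_time_weeks, region_stable):
    """Return a multiplier >= 1.0; riskier supply chains raise criticality."""
    factor = 1.0
    if suppliers <= 1:
        factor += 0.20   # single source: no fallback if that supplier fails
    if lead_time_weeks > 12:
        factor += 0.15   # long lead time: months of exposure after a failure
    if not region_stable:
        factor += 0.15   # unstable region: deliveries may stop entirely
    return factor

# A mill with one overseas supplier, a six-month lead time, and an
# unstable supplier region gets pushed well up the criticality range.
base_score = 600
adjusted = min(1000, round(base_score * supply_chain_risk_factor(
    suppliers=1, lead_time_weeks=26, region_stable=False)))
print(adjusted)  # 900
```

An asset whose adjusted score crosses a site-defined threshold would then trigger the onsite critical-spares stocking decision described above.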
So there are three things to think about as you ponder the setup of your criticality process. I'm sure you will think of others.

Monday, September 9, 2013

The Changing Face of Training: Education through Application

Whether it is quality, safety, leadership, asset management, or reliability training, the expectations are changing. The days of training for training's sake are quickly passing us by. As companies focus more on getting results and a return on investment from their training classes, the industry leaders are no longer relying only on traditional face-to-face, lecture-based classes. To super-charge their education efforts, they are combining multiple media and delivery methods as well as raising the expectations for each student. Below are three areas you might consider making a part of your training efforts.
1. Mix it up: Do not just lecture. Use video, student teach-back, e-learning (example here), and simulations (example here) to keep boredom from rearing its ugly head. If students are bored, material retention will be very low; if they are engaged, or better yet teaching the material, retention will be substantially higher, and so will the return on training. I have always said you do not know the material until you have taught it in front of a group of your peers.
2. Expect application: As part of the class, set the expectation that the student has to go back and apply the core concepts to their area or plant. For example, if they learn about risk analysis, then we would expect them to submit a risk analysis template fully populated for the area they are focusing on.
3. Provide coaching and mentoring: If you are going to have the students submit the application of the concept, then devote the resources to be there as coaches. These coaches provide the students with single-point lessons, corrections, and good feedback from someone who has been there and done it before.
With these three additions alone, the return on your training dollar will increase, and you will be better able to make the changes you want within your organization. We have been able to document 10X returns by using this methodology within our education programs. I hope you can do the same. Feel free to reach out to me if you would like to discuss it more.

Wednesday, September 4, 2013

Benchmarking and Internal Assessments: Eight Questions That Will Improve Your Studies and Their Results

If you saw last week's post here on the troubles "benchmarkers" deal with, then you may be wondering how to make it better. If you are addressing the issues mentioned there and you want to take it to the next level, here are eight points to consider when assessing your performance and benchmarking with others. These are based on the seventy-plus assessments I have been involved in during the last ten years and the struggles I have seen during those and other assessments.
Does your assessment or benchmarking study have the following: 
An element of Volume – Does the assessment have more than just yes-or-no questions? Does it look into the amount of application of the concept or element, or just its existence? For example, does it ask "Do you have planners?" or "What percentage of work is fully planned?" Or, as a second example, "Do you use work orders?" or "What percentage of your craft hours is captured on a work order?" The second question in each pair provides the element of volume. This demonstrates the penetration of the concept into the way the site does business, not just whether there are pockets of excellence in one area.
Detail – Do you have enough data and detail in each of the sections or elements of the assessment? Do you get a complete picture of the components that make up the element being assessed? For example: can you address all of the elements of planning and scheduling effectiveness with just one question in your study? Not likely. You will need more detail and data in order to identify actionable gaps for closure.
Frequency – Are you doing it often enough? If you are only assessing or benchmarking once every ten years, you will have little trend data. I would recommend a robust external assessment every three to four years for maximum effect. External does not have to mean a consultant-driven process; you just need fresh eyes from another site or division.
Process Standard – Do you have a process and standards for the benchmark assessment so that the data is of like kind and comparable with other locations?
Personnel Standard – Do your assessors have the knowledge and competency to assess? Are they task-qualified in the process standard? Are they standardized? Will non-task-qualified assessors affect the comparability of the data where they are involved? I have seen this over and over. Having a standard for performance and qualifying individuals is crucial to effective benchmarking.
Performance – Is the assessment having the desired effect? Is it providing a view into the gaps that you need to address? Is the study executed at a level that identifies next steps to close the gaps? Is it driving the organization to close those gaps? Is the organization better as a whole because it did the study?
Efficiency – Are you doing the study efficiently? Is it a major disruption to the business, or can it be accomplished with minimal interruptions? Is your assessment or benchmark study addressing the elements required without being exhaustive? Is it full of manual calculations, or does it standardize on base data that is used to generate the various calculations "automagically"?
Innovation – Are you finding a better way to do it? Have you improved the assessment and benchmarking process each time you have executed it? This could be through data standardization or tool improvements or collected history.
This becomes the checklist I review to ensure the highest level of success with benchmarking and assessments. Every time I use this list, I up the return on effort of the study.

Are you happy with your assessments and benchmarking studies? Tell us what you like about yours below in the comments section.