However, many SMEs are suppliers to larger entities that are pushing for superior quality and world-class process efficiencies from their suppliers. It includes five real-world case studies that demonstrate how LSS tools have been successfully integrated into the LSS methodology. Simplifying the terminology and methodology of LSS, this book makes the implementation process accessible. It:
- Supplies a general introduction to continuous improvement initiatives in SMEs
- Identifies the key phases in the introduction and development of LSS initiatives within an SME
- Details the most powerful LSS tools and techniques that can be used in an SME environment
- Provides tips on how to make the project selection process more successful
The book also covers the fundamental challenges and common pitfalls that can be avoided with a successful introduction and deployment of LSS in the context of SMEs.
Systematically guiding you through the application of the Six Sigma methodology for problem solving, the book devotes separate chapters to the most appropriate tools and techniques that can be useful in each stage of the methodology. Keeping the required math and statistics to a minimum, this practical guide will help you to deploy LSS as your prime methodology for achieving and sustaining world-class efficiency and effectiveness of critical business processes.
But how do you implement Lean Six Sigma, and what does it entail? Part one gives you all the background you need to understand Lean Six Sigma - what it is, where it came from, what it has done for so many organizations, and what it can do for you and your company.
Parts two and three of the book give you a prescribed yet flexible roadmap to follow in selecting, enacting and realizing improvements from Lean Six Sigma projects. Within this step-by-step structure, the authors demonstrate when and how to use the many Lean Six Sigma statistics and 'tools', packing the pages with diagrams, real-life examples, templates, tips and advice.
If you are a Green Belt or a Black Belt, or trainee, these two parts will be invaluable to you. The Complete Idiot's Guide to Lean Six Sigma is the first book of its kind to integrate the Lean Six Sigma tools within a clear stepwise progression, so readers know when and how to actually apply them in their jobs. As such, this book is superior as a companion to any corporate or organizational Lean Six Sigma 'deployment'. No more complex hodgepodge.
This makes an already complex subject seem still more complex to the neophyte reader. The structure and progression of this book, on the other hand, unfold Lean Six Sigma in a way that lets a reader easily become a user and move more quickly from knowledge to application. Therefore, using The Complete Idiot's Guide to.
Very quick and focused. Encourages broad thinking. Similar in function to a fishbone diagram, but more targeted in showing the input-output linkages.
To construct a Pareto chart:
1. Collect data on different types or categories of problems.
2. Tabulate the scores; determine the counts or impact for each category.
3. Sort the problems by frequency or by level of impact.
4. Draw a vertical axis and divide it into increments equal to the total number you observed.
5. Draw bars for each category, starting with the largest and working down.
6. Convert the raw counts to percentages of the total, then draw a vertical axis on the right that represents percentage. Plot a point above the first bar at the percentage represented by that bar, then another above the second bar representing the combined percentage, and so on. Connect the points.
7. Interpret the results (see next page).
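The counting, sorting, and cumulative-percentage arithmetic described above can be sketched in a few lines of Python; the defect categories and counts below are hypothetical, purely for illustration:

```python
from collections import Counter

def pareto_table(observations):
    """Sort categories by frequency and attach cumulative percentages."""
    counts = Counter(observations)
    total = sum(counts.values())
    rows, cumulative = [], 0
    for category, count in counts.most_common():  # largest first
        cumulative += count
        rows.append((category, count, round(100 * cumulative / total, 1)))
    return rows

# Hypothetical defect log for an order-entry process
log = ["wrong address"] * 5 + ["missing signature"] * 3 + ["illegible"] * 2
for category, count, cum_pct in pareto_table(log):
    print(f"{category:20s} {count:3d} {cum_pct:6.1f}%")
```

The bars are drawn from the counts and the cumulative line from the percentages; the tall bars on the left are the categories to attack first.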
When possible, construct two Pareto charts on a set of data: one that uses count or frequency data, and another that looks at impact (time required to fix the problem, dollar impact, etc.).
You may end up targeting both the most frequent problems and the ones with the biggest impact.
To use the 5 Whys:
1. Select any cause from a cause-and-effect diagram, or a tall bar on a Pareto chart. Make sure everyone has a common understanding of what that cause means.
2. Ask why that cause occurs.
3. Ask why again of the answer, and keep going. Sometimes you may reach a root cause after two or three whys; sometimes you may have to go more than five layers down.
Name the problem or effect of interest.
Be as specific as possible. Decide the major categories for causes and create the basic diagram on a flip chart or whiteboard. Brainstorm for more detailed causes and add them to the diagram (see 5 Whys, p. ). Review the diagram for completeness, then discuss the final diagram. Identify causes you think are most critical for follow-up investigation; this will help you keep track of team decisions and explain them to your sponsor or other advisors. Finally, develop plans for confirming that the potential causes are actual causes.
1. Identify key customer requirements (outputs) from the process map or Voice of the Customer (VOC) studies. This should be a relatively small number, say 5 or fewer outputs.
2. List the outputs across the top of a matrix.
3. Assign a priority score to each output according to its importance to the customer.
4. Identify all process steps and key inputs from the process map, and list them down the side of the matrix.
5. Cross-multiply correlation scores with priority scores and add across for each input.
6. Create a Pareto chart and focus on the variables and relationships with the highest total scores, especially those where there are acknowledged performance gaps (shortfalls).
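The cross-multiply-and-add scoring of a cause-and-effect matrix can be sketched as below; the outputs, inputs, priorities, and correlation scores are all hypothetical:

```python
# Output priorities (importance to the customer) - hypothetical values
priorities = {"on-time delivery": 9, "accuracy": 7}

# Correlation of each process input with each output - hypothetical 0/1/3/9 scores
correlations = {
    "order entry method": {"on-time delivery": 3, "accuracy": 9},
    "courier selection":  {"on-time delivery": 9, "accuracy": 1},
}

def ce_matrix_scores(priorities, correlations):
    """Cross-multiply correlation x priority and sum across outputs per input."""
    return {
        inp: sum(priorities[out] * corr.get(out, 0) for out in priorities)
        for inp, corr in correlations.items()
    }

scores = ce_matrix_scores(priorities, correlations)
# order entry method: 3*9 + 9*7 = 90; courier selection: 9*9 + 1*7 = 88
```

A Pareto chart of these row totals then points to the inputs most worth investigating.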
Part B: Confirming causal effects and results Purpose of these tools To confirm whether a potential cause contributes to the problem. The tools in this section will help you confirm a cause-and-effect relationship and quantify the magnitude of the effect.
In such cases, try confirming the effect by creating stratified data plots, p. However, there are times when more rigor, precision, or sophistication is needed. Hypothesis tests supply the basic statistical calculations for determining whether two values are statistically different within a certain range of probability. The choice depends in part on what kinds of data you have (see table below).
It is an excellent choice whenever there are a number of factors that may be affecting the outcome of interest, or when you suspect there are interactions between different causal factors. See stratification factors, p. Collect the stratification information at the same time as you collect the basic data.
Option 2: Color code or use symbols for different strata. This chart uses symbols to show performance differences between people from different work teams. Training seems to have paid off for Team D (all its top performers are in the upper right corner); Team C has high performers who received little training (they are in the lower right corner).
Confirm the potential cause you want to experiment with, and document the expected impact on the process output. Develop a plan for the experiment. Present your plan to the process owner and get approval for conducting the experiment.
Train data collectors. Alert process staff of the impending experiment; get their involvement if possible. Conduct the experiment and gather data. Analyze results and develop a plan for the next steps. Were problems reduced or eliminated? If the test shows an effect, continue with your regular procedures for planning and testing full-scale implementation. You need to approach quick fixes with an experimental mindset: predicting what changes you expect to see, planning specifically what changes to make, knowing what data you will collect to measure the effect, and so on.
Determine appropriate measures and increments for the axes on the plot: mark units for the suspected cause (input X) on the horizontal X-axis, and units for the output (Y) on the vertical Y-axis. Then plot the points on the chart.
Interpreting scatter plot patterns:
- No pattern: data points are scattered randomly in the chart.
- Positive correlation: the line slopes from bottom left to top right; larger values of one variable are associated with larger values of the other variable.
- Negative correlation: the line slopes from upper left down to lower right; larger values of one variable are associated with smaller values of the other variable.
- Complex patterns: these often occur when there is some other factor at work that interacts with one of the factors.
Multiple regression or design of experiments can help you discover the source of these patterns. But sometimes you may want to compare two input variables (Xs) or two output variables (Ys) to each other. Similarly, we never accept or prove that the alternative is right—we only reject the null. To the layperson, this kind of language can be confusing. What you may want to know is that the Z (normal) distribution is used when the standard deviation is known.
Selecting a smaller alpha means that the beta risk is greater. Z-distributions are not covered in this book since they are rarely used in practice; because of the Central Limit Theorem (p. ), the t-distribution serves in most situations. On the following pages, we show how these calculations are done; refer to any good statistics textbook for t-distribution tables or if you need to do these calculations by hand. Example: An automobile manufacturer has a target length for camshafts. Printouts from Minitab showing the results of this hypothesis test are shown on the next page.
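As a rough sketch of the underlying calculation (not the Minitab output), a one-sample t statistic can be computed with the standard library; the camshaft lengths and the 600 mm target below are invented for illustration:

```python
import math
import statistics

def one_sample_t(sample, target):
    """t statistic and degrees of freedom for a mean-vs-target comparison."""
    n = len(sample)
    s = statistics.stdev(sample)  # sample standard deviation
    t = (statistics.mean(sample) - target) / (s / math.sqrt(n))
    return t, n - 1

# Hypothetical camshaft lengths (mm) tested against an assumed 600 mm target
lengths = [599.8, 600.2, 600.1, 599.9, 600.3, 600.4]
t, df = one_sample_t(lengths, 600.0)
# Compare |t| with the critical t value for df degrees of freedom at your alpha
```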
These coefficients are used to determine whether the relationship is statistically significant (the likelihood that certain values of one variable are associated with certain values of the other variable). How much data do you need? If the residuals show unusual patterns, you cannot trust the results. The graph shown on the previous page was generated to depict how the number of pizza deliveries affected how long customers had to wait.
The company can use this equation to predict wait time for customers. The prediction equation proceeds the same as for simple regression, p. R-squared (adj) is the percent of variation that is explained by the model, adjusted for the number of terms in the model and the size of the sample (more factors and smaller sample sizes increase uncertainty).
In multiple regression, use R-Sq (adj) as the amount of variation explained by the model. S is the estimate of the standard deviation about the regression model; we want S to be as small as possible. The p-values tell us that this must have been a hypothesis test. If a p-value is greater than the chosen upper cutoff, drop the term from the model; a practitioner might leave the term in the model if the p-value is within the gray region between the two probability levels.
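A bare-bones least-squares fit, in the spirit of the pizza-delivery example, can be sketched as follows (the data points are invented, not the book's):

```python
def simple_regression(xs, ys):
    """Least-squares fit y = b0 + b1*x; returns intercept, slope, and R-squared."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    ss_tot = sum((y - my) ** 2 for y in ys)
    ss_res = sum((y - b0 - b1 * x) ** 2 for x, y in zip(xs, ys))
    return b0, b1, 1 - ss_res / ss_tot

# Invented data: deliveries in progress (x) vs. customer wait time in minutes (y)
deliveries = [2, 4, 6, 8, 10]
wait_times = [11, 19, 32, 41, 48]
b0, b1, r_sq = simple_regression(deliveries, wait_times)
predicted = b0 + b1 * 7  # predicted wait time with 7 deliveries in progress
```

An R-squared near 1 with well-behaved residuals is what licenses using the equation for prediction.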
In the following example, the table shows the relationship between different pairs of factors (correlations tested among Total Pizzas, Defects, Incorrect Order, and Delivery Time on a pairwise basis). You need a valid sampling strategy and an acceptable MSA. Rather than relying on the p-values alone, the computer looks at all possible combinations of variables and prints the resulting model characteristics.
The samples may be drawn from several different sources or under several different circumstances; these are referred to as levels. The test does not tell us which one(s) is different.
Select a sample size and factor levels. Randomly conduct your trials and collect the data. Conduct the ANOVA analysis typically done through statistical software; see below for interpretation of results. Follow up with pairwise comparisons, if needed. If the ANOVA shows that at least one of the means is different, pairwise comparisons are done to show which ones are different.
Examine the residuals and check the variance and normality assumptions. Generate main effects plots, interval plots, etc. Draw conclusions. It becomes a sort of standard of variability that other values are checked against. Further analysis is needed to see whether there is more than one significant difference. For example, one facility could differ from all the others, or several facilities could differ significantly from each other. Pairwise comparisons would test Facility A vs. Facility B; Facility A vs. Facility C; Facility B vs. Facility C, etc. Alpha is determined by the individual error rate—and will be less for the individual test than the alpha for the family. See chart on next page. The top number in each set is the lower limit; the bottom number is the upper limit.
Degrees of Freedom The number of independent data points in the samples determines the available degrees of freedom df for the analysis. Test this with residuals plots see p. Still, make a habit of checking for constant variances.
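The core F-statistic calculation behind a one-way ANOVA can be sketched as follows; the three facilities and their cycle times are hypothetical:

```python
def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: how far each group mean sits from the grand mean
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: scatter of points around their own group mean
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k)), (k - 1, n - k)

# Hypothetical cycle times (minutes) sampled at three facilities
f_stat, df = one_way_anova_f([[10, 12, 11], [14, 15, 16], [10, 11, 12]])
# A large F says at least one facility mean differs;
# pairwise comparisons would then show which ones.
```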
It is an opportunity to learn if factor levels have different amounts of variability, which is useful information. The right side shows a difference in times between the three locations (California, New York, and Texas).
1. Define the categories to compare (e.g., Supplier A vs. Supplier B, Pass or Fail).
2. Collect the data.
3. Interpret the p-value; that is all you need to do. Beware of other hidden factors (Xs).
Factor — A controlled or uncontrolled input variable.
Fractional Factorial DOE — Looks at only a fraction of all the possible combinations contained in a full factorial.
If many factors are being investigated, information can be obtained with a smaller investment.
Full Factorial DOE — Examines every possible combination of factors at the levels tested. The full factorial design is an experimental strategy that allows us to answer most questions completely.
Level — A specific value or setting of a factor.
Effect — The change in the response variable that occurs as experimental conditions change.
Interaction — Occurs when the effect of one factor on the response depends on the setting of another factor.
Repetition — Running several samples during one experimental setup (run).
Replication — Duplicating the entire experiment in a time sequence, with different setups between each run.
Randomization — A technique used to spread the effect of nuisance variables across the entire experimental region. Use random numbers to determine the order of the experimental runs or the assignment of experimental units to the different factor-level combinations.
Resolution — How much sensitivity the results have to different levels of interactions.
Run — A single setup in a DOE from which data is gathered.
Minitab and other programs can calculate the higher-order effects, but generally such effects are of little importance and are ignored in the analysis.
Planning a designed experiment: Design of Experiments is one of the most powerful tools for understanding and reducing variation in any process. Define the problem in business terms, such as cost, response time, customer satisfaction, or service level.
Identify a measurable objective that you can quantify as a response variable. Identify input variables and their levels (see p. ). What resources will it take? Perform an experiment and analyze the results. What was learned? What is the next course of action? Carry out more experimentation, or apply the knowledge gained and stabilize the process at the new level of performance. Mean and standard deviation? Classify each input as one of the following: (1) Controllable factor (X) — factors that can be manipulated to see their effect on the outputs.
Decide how to address these in your plans see details below. Ex: weather, shift, supplier, user, machine age, etc. Will it require excessive effort or cost? Would it be something you would be willing to implement and live with?
Ex: Operator skill level in a manufacturing process. Ex: Friendliness of a customer service rep.
Tips for treating noise factors: A noise or nuisance factor is a factor beyond our control that affects the response variable of interest. They are conservative, since information about all main effects and interactions can be determined.
You have to acknowledge that any measured main effects could be influenced by 3-way interactions. Since 3-way interactions are relatively rare, attributing the measured differences to the main effects only is most often a safe assumption.
This design would not be a good way to estimate 2-way interactions. Interpreting DOE results Most statistical software packages will give you results for main effects, interactions, and standard deviations.
Lines with steeper slopes (up or down) have a bigger impact on the output means than lines with little or no slope (flat or almost flat lines). The other lines seem flat or almost flat, so those main effects are less likely to be significant. Here, Design has much more variation at one level than the other factors do, so you can expect it to have much more variation at one level than at the other level.
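Computing a main effect from a small two-level full factorial can be sketched as follows; the factors, levels, and responses here are hypothetical:

```python
def main_effect(runs, factor):
    """Main effect = mean response at the high (+1) level minus mean at low (-1)."""
    hi = [r["y"] for r in runs if r[factor] == +1]
    lo = [r["y"] for r in runs if r[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Invented 2x2 full factorial: factor A (say, temperature), factor B (say, pressure)
runs = [
    {"A": -1, "B": -1, "y": 20},
    {"A": +1, "B": -1, "y": 30},
    {"A": -1, "B": +1, "y": 22},
    {"A": +1, "B": +1, "y": 36},
]
effect_a = main_effect(runs, "A")  # (30 + 36)/2 - (20 + 22)/2 = 12
effect_b = main_effect(runs, "B")  # (22 + 36)/2 - (20 + 30)/2 = 4
```

Factor A's larger effect is exactly the steeper slope a main effects plot would show.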
Practical Note: Moderate departures from normality of the residuals are of little concern. We always want to check the residuals, though, because they are an opportunity to learn more about the data. Focus on removing non-value-add work through variation reduction see Chapter 7 and use of the following tools covered in this chapter: — Setup reduction, total productive maintenance, mistake proofing, process balancing 3.
This is a basic technique that should be used in every workspace. See process maps in Chapter 3 or work cell optimization p.
Basic Lean concepts. Process Lead Time (also called process cycle time, total lead time, or total cycle time): the time from when a work item (product, order, etc.) enters a process until it exits.
Ex: There were refinance applications in process at the end of the month Average Completion Rate Exit Rate or Throughput : The output of a process over a defined period of time. Capacity: The maximum amount of product or service output a process can deliver over a continuous period of time.
Ex: The capacity of our process is mortgage applications per day Takt Rate customer demand rate : The amount of product or service required by customers over a continuous period of time.
Processes should be timed to produce at the takt rate. Any lower and you will be disappointing customers; any higher and you will be producing output that cannot be used.
Ex: The takt rate for mortgage applications is applications per day Time Trap: Any process step activity that inserts delay time into a process. Ex: data entry clerks gather up all mortgage applications for an entire day before entering them into the computer system—this causes delays for the mortgages received during the day, which is a time trap Capacity Constraint: An activity in the process that is unable to produce at the completion exit rate required to meet customer demand takt rate.
Ex: Property appraisers can evaluate properties per day, but customer demand is currently applications per day—appraisers are a capacity constraint Value-add VA time: any process step or activity that transforms the form, fit, or function of the product or service for which the customer is willing to pay Ex: The sum of the value-add times in the mortgage refinancing process is 3.
Customers would be willing to buy a product or service that did not have these costs if it meant a lower price. Metrics of time efficiency The purpose of the tools in this chapter is to improve how time and energy are spent in a process. The three metrics described here can help you identify the sources and impact of inefficiency.
You can find these opportunities with a value stream map p. WTT is important in improvement efforts because it helps highlight which process step time trap to work on first. Focus on identifying time traps if the goal of your project is to improve efficiencies in inventory, lead time, output rates, etc.
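The time metrics defined above can be computed directly. The lead-time relationship used here (lead time = WIP / exit rate) is Little's Law; the mortgage numbers are invented for illustration:

```python
def process_lead_time(wip, exit_rate):
    """Little's Law: lead time = work-in-process / average completion (exit) rate."""
    return wip / exit_rate

def process_cycle_efficiency(va_time, lead_time):
    """PCE = value-add time as a fraction of total process lead time."""
    return va_time / lead_time

wip = 120        # applications currently in process (hypothetical)
exit_rate = 20   # applications completed per day (hypothetical)
lead = process_lead_time(wip, exit_rate)    # 6 days
pce = process_cycle_efficiency(0.5, lead)   # 0.5 days of VA time -> about 8%
```

A low PCE like this flags a process dominated by delay rather than value-add work.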
Work on the time trap that is injecting the most delay into your process first. Focus on identifying capacity constraints if the goal of your project is to increase output to meet real customer demand. Relying on intuition will lead you astray. Review Batch Sizing principles on p. An example tag is shown below.
GOAL: To arrange all needed work items in line with the physical workflow, and make them easy to locate and use.
1. Draw a current-state map showing the location of all materials, supplies, forms, etc.
S3: Shine. Shine emphasizes removing the dirt, grime, and dust from the work area; it is a program of keeping the work area swept and clean of debris.
1. Cleanliness — sweep the floor, place tools on a shadowboard.
2. Assign responsibility for completing housekeeping chores.
3. Create procedures for continued daily shine processes: create a table that shows which housekeeping tasks must be performed, how often, and by whom.
Set periodic inspection and targets for machinery, equipment, computers, furnishings, etc. S4: Standardize Standardize means creating a consistent way of implementing the tasks performed daily, including Sort, Set in order, and Shine. S5: Sustain Sustain means that the 5S program has a discipline that ensures its continued success.
Create a 5S audit form or radar chart for summarizing results, and establish a company standard for the format. When to use a Generic Pull System: whenever lead times are critical to satisfying customers and non-value-add cost is significant compared to value-add cost. Identify the target PCE, then calculate the target lead time for the process. From there, you need a plan to reduce current WIP and to release work into the system to match the exit rate.
1. Count the WIP in your process.
2. Create a triage system for determining the order in which future work will be released into the system.
Option 1: First-In, First-Out (FIFO) — whatever comes in first gets processed first.
Option 2: Triaging— working on highest-potential items first. Not all customer requests or orders, for example, represent the same level of potential for your company.
You need to set up criteria for rating or ranking new work requests so you can tell the difference between high- and low-potential requests. This is often used in sales and other service applications.
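Option 2 (triaging) amounts to a priority queue: release the highest-rated request first, falling back to arrival order for ties. A minimal sketch, with invented items and scores:

```python
import heapq

class TriageQueue:
    """Release work highest-potential first; FIFO among equal scores."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker that preserves arrival order

    def add(self, item, potential_score):
        # heapq is a min-heap, so negate the score to pop the highest first
        heapq.heappush(self._heap, (-potential_score, self._seq, item))
        self._seq += 1

    def release_next(self):
        return heapq.heappop(self._heap)[2]

q = TriageQueue()
q.add("small order", 2)        # scores come from your rating criteria (invented here)
q.add("key account order", 9)
q.add("medium order", 5)
first = q.release_next()       # "key account order"
```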
See the Queue time formula, p. How to create a replenishment pull system: often a combination of factors should be considered; this method is based on empirical computations and experience. Supply room staff replace stocked items and switch the RED card to the items just replaced. Attach cards to magnetic strip labels for ease of movement. To have a Lean system operating at peak efficiency with lowest cost, compute the minimum safe batch size from the formula shown here. Assumption: all products have the same demand and process parameters. There is a more sophisticated version of the equation protected by U.S. patent.
Tool 4. Then ask why you have to stop, and figure out how to eliminate that source of delays or interruptions. Ex: Have computer programs compile orders every evening so that all the information is waiting for order processors the next morning. Ex: Use Replenishment Pull systems, p. Do studies to get data on what settings are best under what conditions, what procedures result in the most accurate part placement, etc.
The language is a bit different, however. Step 1. Useful definitions: Preventive Maintenance — maintenance that occurs at regular intervals, determined by time (Ex: every month) or usage (Ex: every so many units). Predictive Maintenance — maintenance performed on equipment based on signals or diagnostic techniques that indicate deterioration in the equipment. Both are common-sense approaches for proactively maintaining equipment, eliminating unscheduled downtime, and improving the level of cooperation between Operations and Maintenance.
Planned downtime: breaks, meetings, preventive maintenance.
Clean the machine thoroughly (done by all team members): remove debris and fix physical imperfections; thoroughly degrease; use compressed air for controls; change filters, lubricants, etc. Place a color-coded tag or note on areas requiring repair, and record all needed repairs in a project notebook.
1. Review the defect tags from Phase 1.
2. Provide for early detection of problems by training operators in preventive and predictive maintenance techniques (PMs): operators must be trained on all prescribed PMs; each operator is responsible for performing PMs as documented; the Production Supervisor ensures PMs are effective.
Install visual controls (see p. ). Implement 5S housekeeping and organization (see p. ).
Ex: Machine operations that make it very difficult or impossible to produce a defective product; does not require human assistance. Ex: An electronic checklist built into a process. Mistake proofing is making it impossible for errors to be passed to the next step in the process.
Ex: Devices or systems that either prevent defects or inexpensively inspect each item to determine whether it is defective. Ex: Software programming that makes it impossible to move on to the next step until all information is entered into a form. When to use mistake prevention and mistake proofing: use them when rework to correct errors, or process delays downstream (perhaps caused by a lack of material or information), are hurting Process Cycle Efficiency.
Two mistake-proofing systems
A.
1. Describe the defect and its impact on customers.
2. Identify the process step where the defect is discovered and the step where it is created.
3. Detail the standard procedures where the defect is created.
4. Identify errors in or deviations from the standard procedure.
5. Investigate and analyze the root cause of each deviation.
6. Brainstorm ideas to eliminate or detect the deviation early.
7. Create, test, validate, and implement a mistake-proofing device.
Process balancing design principles: If overall lead time suffers because work is improperly balanced, see the takt time chart on p.
When to use work cell optimization: Whenever you have inefficient workflow (too much movement of people or materials). Decide where raw materials and WIP inventory will be located. Apply operational improvements to reduce batch sizes, and apply line balancing principles (see p. ). A skills matrix can help set priorities for training and help staff know whom to consult on a particular question.
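One arithmetic check behind line balancing is the theoretical minimum number of stations: total work content divided by takt time, rounded up. The task times and takt time below are invented:

```python
import math

def min_stations(task_times, takt_time):
    """Theoretical minimum stations = total work content / takt time, rounded up."""
    return math.ceil(sum(task_times) / takt_time)

tasks = [4, 6, 3, 7, 5]           # task times in minutes (hypothetical)
needed = min_stations(tasks, 10)  # 25 minutes of work / 10-minute takt -> 3
```

Balancing then means distributing the tasks so no station's work content exceeds the takt time.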
For example, the value stream map on page 45 followed only one high-volume product family down the process flow. But often the biggest contributor to non-value-add time and costs is the variety of products and services, and especially the impact of low-volume offerings.
The purpose of these tools is to help you diagnose and quantify complexity opportunities in your business unit or value stream. For more information on complexity analysis, and for the strategic corporate-wide approach, refer to Conquering Complexity in Your Business (McGraw-Hill). Use these tools as prework for a complexity value stream map, to allow you to represent full diversity without mapping every single product or service through every process step.
Done as a check to prevent eliminating products or services that can be made less complex with little effort.
1. List the subprocesses in a business unit, or within your project value stream, across the top of a matrix.
2. List each product or service down the side.
3. Sort your products or services into families based on the similarity of process flow. You can also include other factors, such as processing time per unit. The grid on the previous page, from a financial services company, identified four separate families of services.
But they decided to keep the equity loans as a separate family because the volume is so much lower than either Family A offering. Family B options all require Inspections; Family D is the only offering that does not require an Appraisal.
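Grouping offerings whose process flows match, as in the family grid, can be sketched like this; the offerings and subprocess names are invented:

```python
from collections import defaultdict

def group_families(routings):
    """Group offerings whose sets of required process steps are identical."""
    families = defaultdict(list)
    for offering, steps in routings.items():
        families[frozenset(steps)].append(offering)
    return list(families.values())

# Invented routings: which subprocesses each offering passes through
routings = {
    "30-yr mortgage": {"application", "appraisal", "underwriting"},
    "15-yr mortgage": {"application", "appraisal", "underwriting"},
    "equity loan":    {"application", "underwriting"},
}
families = group_families(routings)  # two families: the mortgages, and the loan
```

Matching on exact step sets is a deliberate simplification; in practice you would also weigh processing times and volumes before merging offerings into one family.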
While you can cluster several low-volume offerings into a single family, do NOT cluster them with higher-volume offerings. These data are inputs to the Complexity Equation. Use the instructions for creating a value stream map (see p. ), with a unique symbol to represent each product or service family. The example below shows three families from the financial services company.
It should be used to determine what factors are contributing to low PCE levels. For example, it assumes no variation in demand or processing times.
In this form, it is fully adequate to provide useful insight. A much more sophisticated version can be built using the full equations to simulate the more complex case; see Conquering Complexity in Your Business (McGraw-Hill) for more details.
Complexity matrix Purpose To uncover patterns and understand whether the biggest issue relates to a particular product or service family, or to the process through which a product or service flows. When to use a complexity matrix Use a Complexity Matrix after complexity value stream mapping to analyze the nature and degree of PCE destruction.
How to create a complexity matrix:
1. Calculate PCE Destruction (see next page) for each offering.
2. Enter the data into a matrix like that shown below.
Interpreting the results. Matrix with a process problem: if a process problem is indicated, corrective actions may include traditional Lean Six Sigma actions, such as improving quality, reducing lead time or WIP, etc. Options include stratifying the offering or increasing commonality (reducing the number of parts, assemblies, suppliers, forms, etc.).
Other options include diverting products through a different and lower-cost or separate delivery channel; improving the design or configuration; or potentially outsourcing or eliminating the offering. PCED is 30 for the low-volume offering vs.
1. Locate and compare any documentation that lists parts, materials, process steps, etc.
2. Look for hidden commonality between the different products.
3. Identify ways to increase commonality (reducing part numbers, assemblies, suppliers, etc.).
Ex: They discovered the two frames shared all but four parts, three of which added no form, function, or feature to the frame.
The company found areas of commonality that could be exploited to simplify product design and manufacture which reduces non-value-add work and will translate into improved PCE. Identify a likely course of action for attacking a specific complexity problem addressing either a process issue or a product issue.
Determine how this change would affect the PCED numbers in the complexity matrix. Standardized components? Reduced setup time? Eliminated a product or redirected it through a different channel?
Example 1: The impact of removing a process step. Using CVSM data, a company decided to see what would happen if they removed a process step that injected a lot of non-value-add cost into the process.
Example 2: The impact of removing an offering family. This company also identified one product family that consisted of only two products.
They wanted to investigate the impact of simply stopping production of that family, thus simplifying their product portfolio. Often, the best potential solutions emerge toward the end of intense creativity efforts, when everybody has many details fresh in mind related to the problem at hand. Whenever possible, synthesize by combining the best characteristics from alternative options to generate a single, stronger solution.
Be sure to develop and use evaluation criteria (see below). Work through the list once to generate some ideas, evaluate those ideas, then brainstorm off the best options or newer ideas to see if you can develop even better solutions. The most visible? Remove show-stoppers from your list of alternative solutions: solutions with components that would prohibit their implementation should be removed prior to performing additional analysis.
Consider organization fit for each remaining idea. The solution must be capable of obtaining management commitment, and fit with customer needs, strategic objectives, organizational values, and the organization culture. Eliminate any options that do poorly on questions such as: Management commitment—can you develop support for this idea? Strategic factors and organizational values—is this idea consistent with our one- and two-year goals? Operating and management systems—does the potential solution complement or conflict with our decision-making, accounting, communication, and reward systems?
Determine the project goal impact for each remaining idea. Each potential solution must be evaluated on its ability to reduce or eliminate the root causes of poor performance.
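A simple weighted-criteria score for comparing the remaining ideas can be sketched as follows; the criteria, weights, and ratings are all invented for illustration:

```python
def weighted_scores(criteria_weights, ratings):
    """Score each solution: sum of (criterion weight x rating) across criteria."""
    return {
        solution: sum(criteria_weights[c] * r.get(c, 0) for c in criteria_weights)
        for solution, r in ratings.items()
    }

# Invented weights and 1-5 ratings for two candidate solutions
weights = {"root-cause impact": 5, "cost to implement": 3, "organizational fit": 2}
ratings = {
    "automate data entry": {"root-cause impact": 4, "cost to implement": 2,
                            "organizational fit": 5},
    "add inspection step": {"root-cause impact": 2, "cost to implement": 4,
                            "organizational fit": 3},
}
scores = weighted_scores(weights, ratings)  # 36 vs. 28
best = max(scores, key=scores.get)          # "automate data entry"
```

Weighting root-cause impact most heavily reflects the evaluation principle above; the scores only rank options and are no substitute for checking organizational fit directly.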