Tuesday, June 16, 2009

QA & Six Sigma Dict :



Acceptance Number


The highest number of nonconforming units or defects found in the sample that permits the acceptance of the lot.

Accessory Planning


The planned utilization of remnant material for value-added purposes.

Accuracy

Accuracy refers to the variation between a measurement and what actually exists. It is the difference between an individual's average measurements and that of a known standard, or accepted 'truth.'

In everyday usage, accuracy also refers to exactness or conformity to fact, although in measurement system analysis it is distinct from precision.

Affinity Diagram


A tool used to organize and present large amounts of data (ideas, issues, solutions, problems) into logical categories based on user perceived relationships.

Alpha risk


Alpha risk is defined as the risk of accepting the alternate hypothesis when, in fact, the null hypothesis is true; in other words, stating that a difference exists where actually there is none. Alpha risk is stated in terms of probability (such as 0.05 or 5%).

Alternative Hypothesis (Ha)


The alternate hypothesis (Ha) is a statement that the observed difference or relationship between two populations is real and not due to chance or sampling error. The alternate hypothesis is the opposite of the null hypothesis (Ho).

ANOVA(Analysis Of Variance)


Analysis of variance is a statistical technique for analyzing data that tests for a difference between two or more means. See the tool 1-Way ANOVA

Analysis of variance is also a calculation procedure that allocates the amount of variation in a process and determines whether it is significant or caused by random noise. A balanced ANOVA has equal numbers of measurements in each group/column. In a stacked ANOVA, each factor's data occupy a single column, as does the response.

APQP


Advanced Product Quality Planning

AQL

Acceptable Quality Level. Also referred to as Assured Quality Level. The largest percentage of defectives at which a lot can still be considered acceptable. Customers will, of course, prefer zero-defect products or services. There is only one ideal acceptable quality level (zero defects); all others are compromises based upon acceptable business, financial and safety levels.

Assurance


Establishing and maintaining a degree of confidence internally and externally.

Alternate definition:
Establishing and maintaining the commitments made to Internal and External Customers.

Attribute Data

As opposed to continuous data (quantitative) like time, money, length, etc., attribute data (also called discrete data or qualitative data) is classified as either good or bad, success or failure, off or on. It is binomial data that isn't easily analysed with Six Sigma tools (which were designed more for the manufacturing area with its quantitative processes); many tools and tests can't be performed on it.

Audit

A timely inspection of a process or system to ensure conformance to documented quality standards. An audit also brings out discrepancies between the documented standards and the standards actually followed, and may show how well or how badly the documented standards support the processes currently followed.
Corrective, Preventive & Improvement Actions should be undertaken to mitigate the gap(s) between what is said (documented), what is done and what is required to comply with the appropriate quality standard.

Average Incoming Quality


AIQ - Average Incoming Quality: This is the average quality level going into the inspection point.

Average Outgoing Quality

AOQ - Average Outgoing Quality: The average quality level leaving the inspection point after rejection and acceptance of a number of lots. If rejected lots are not checked 100% and defective units removed or replaced with good units, the AOQ will be the same as the AIQ.

Bar Chart

A bar chart is a graphical comparison of several quantities in which the lengths of the horizontal or vertical bars represent the relative magnitudes of the values.

BAU


"Business As Usual" The old way of doing business, considering repetitive tasks with no critical sense of improvement

Benchmarking


The concept of discovering what is the best performance being achieved, whether in your company, by a competitor, or by an entirely different industry.

Benchmarking is an improvement tool whereby a company measures its performance or process against other companies' best practices, determines how those companies achieved their performance levels, and uses the information to improve its own performance.

Best Practice


A lesson learned from one area of a business that can be passed on to another area of the business or between businesses.

Beta risk


Beta risk is defined as the risk of accepting the null hypothesis when, in fact, the alternate hypothesis is true. In other words, stating no difference exists when there is an actual difference. A statistical test should be capable of detecting differences that are important to you.

Bias


Bias in a sample is the presence or influence of any factor that causes the population or process being sampled to appear different from what it actually is. Bias is introduced into a sample when data is collected without regard to key factors that may influence it.

Black Belt


Full-time team leaders responsible for implementing process improvement projects (DMAIC or DMADV) within the business, to drive customer satisfaction levels and business productivity up.

The Black Belt represents the beginning, the start of a never ending journey of discipline, work, and the pursuit of an ever-higher standard

Black Noise


Special cause variation in a process, as opposed to white noise (common cause variation).

Blocking


Blocking neutralizes background variables that cannot be eliminated by randomizing. It does so by spreading them across the experiment.

Boxplot
A box plot, also known as a box and whisker diagram, is a basic graphing tool that displays centering, spread, and distribution of a continuous data set

Brainstorming


A method to generate ideas. Ground rules such as "no idea is a bad idea" are typical. The benefit of brainstorming is the power of the group in building on each other's ideas.

Business Value Added


A step or change made to the product which is necessary for future or subsequent steps but is not noticed by the final customer.

Business Process Quality Management


Also called Process Management or Reengineering. The concept of defining macro and micro processes, assigning ownership, and creating responsibilities of the owners.

Capability Analysis


Capability analysis is a Minitab tool that visually compares actual process performance to the performance standards.

Cause


A factor (X) that has an impact on a response variable (Y); a source of variation in a process or product

Cause and Effect Diagram


A cause and effect diagram is a visual tool used to logically organize possible causes for a specific problem or effect by graphically displaying them in increasing detail. It helps to identify root causes and ensures common understanding of the causes.

Center


The center of a process is the average value of its data. It is equivalent to the mean and is one measure of the central tendency.

Center points


A center point is a run performed with all factors set halfway between their low and high levels. Each factor must be continuous to have a logical halfway point. For example, there are no logical center points for the factors vendor, machine, or location

Central Limit Theorem


The central limit theorem states that given a distribution with mean μ and variance σ², the sampling distribution of the mean approaches a normal distribution with mean μ and variance σ²/N as N, the sample size, increases.

The central limit theorem explains why many distributions tend to be close to the normal distribution.
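A minimal simulation sketch in Python illustrates the theorem; the population (uniform on [0, 1), with mean 0.5 and variance 1/12), the sample size, and the number of trials are all arbitrary choices for the demonstration:

```python
import random
import statistics

random.seed(42)
N = 30          # sample size
TRIALS = 5000   # number of sample means to collect

# Population: uniform on [0, 1), which has mean 0.5 and variance 1/12
means = [statistics.fmean(random.random() for _ in range(N)) for _ in range(TRIALS)]

print(round(statistics.fmean(means), 3))         # close to the population mean, 0.5
print(round(statistics.variance(means) * N, 3))  # close to the population variance, 1/12
```

The distribution of `means` is approximately normal even though the underlying population is uniform, and its variance shrinks as σ²/N.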


Central Tendency


The numerical average (e.g. mean, median or mode) of a process distribution. Can also be displayed as the centerline of a process control chart.

An indication of the location or centrality of the data. The most common measures of central tendency are: mean (numerical average), median (the midpoint of an ordered data set, such that half of the data points are above it and half are below it) and mode (the value that occurs most frequently).
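All three measures are available in Python's standard statistics module; the data values below are made up:

```python
import statistics

data = [2, 3, 3, 5, 7, 8, 8, 8, 10]  # hypothetical measurements

print(statistics.mean(data))    # 6 (numerical average)
print(statistics.median(data))  # 7 (midpoint of the ordered data)
print(statistics.mode(data))    # 8 (occurs most frequently)
```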

Champion
Business leaders and senior managers who ensure that resources are available for training and projects, and who are involved in project tollgate reviews.

Characteristic
A characteristic is a definable or measurable feature of a process, product, or variable.

Charter


A document or sheet that clearly scopes and identifies the purpose of a Quality improvement project. Items specified include background case, purpose, team members, scope, timeline.

Chi Square Test


The Chi Square Test is a statistical goodness-of-fit-test used to test the assumption that the distribution of a set of data is similar to the expected distribution, such as a normal distribution.

A chi square test, also called "test of association," is a statistical test of association between discrete variables. It is based on a mathematical comparison of the number of observed counts with the number of expected counts to determine if there is a difference.
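The observed-vs-expected comparison can be sketched with the standard library; the 2x2 table of pass/fail counts from two machines is made-up data (a full test would also convert the statistic to a p-value):

```python
# Observed counts: rows = machine A/B, columns = pass/fail (hypothetical data)
observed = [[40, 10],
            [30, 20]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Chi-square statistic: sum of (observed - expected)^2 / expected
chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi_sq += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)
print(round(chi_sq, 2), df)  # 4.76 1
```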

Common Cause Variation


Common cause variation is fluctuation caused by unknown factors resulting in a steady but random distribution of output around the average of the data. It is a measure of the process potential, or how well the process can perform when special cause variation is removed.

Common cause variability is a source of variation caused by unknown factors that result in a steady but random distribution of output around the average of the data. Common cause variation is a measure of the process's potential, or how well the process can perform when special cause variation is removed. Therefore, it is a measure of the process technology. Common cause variation is also called random variation, noise, noncontrollable variation, within-group variation, or inherent variation. Example: many X's, each with a small impact.

Confidence band (or interval)


Measurement of the certainty of the shape of the fitted regression line. A 95% confidence band implies a 95% chance that the true regression line fits within the confidence bands. Measurement of certainty.

Confidence Interval


How "wide" you have to cast your "net" to be sure of capturing the true population parameter. If my estimate of defects is 10%, I might also say that my 95% Confidence Interval is plus or minus 2%, meaning that odds are 95 out of 100 hundred that the true population parameter is somewhere between 8 and 12%.

Dashboard


A dashboard is a tool used for collecting and reporting information about vital customer requirements and your business's performance for key customers. Dashboards provide a quick summary of process performance

Data


Data is factual information used as a basis for reasoning, discussion, or calculation; often this term refers to quantitative information

Defect
A defect is any nonconformity in a product or process; it is any event that does not meet the performance standards of a Y.

Defective


The word defective describes an entire unit that fails to meet acceptance criteria, regardless of the number of defects within the unit. A unit may be defective because of one or more defects.

Defects Per Unit – DPU


DPU, or Defects Per Unit, is just what the name suggests: how many defects we can expect to find in a single unit, given our process capability. Consider 100 loan applications, and assume there are 3 opportunities for error/defect in each application. If out of the 100 loan applications there are 30 defects, the FTY is .70, or 70 percent. Further investigation finds that 10 of the 70 had to be reworked to achieve that yield, so our Rolled Throughput Yield is (100 - (30 + 10))/100 = .6, or 60 percent yield.

To calculate the defects per opportunity (DPO) in this process, we divide the number of defects by the result of multiplying the number of units by the number of opportunities in each item:

No. of defects/((no. of units)*(no. of opportunities for a defect)) = 30/(100*3) = 30/300 = .1, so there is a 10 percent chance of a defect at any single opportunity in this process. DPU itself is simply defects divided by units: 30/100 = .3.
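The arithmetic for the loan-application example can be sketched in a few lines of Python:

```python
defects, units, opps_per_unit = 30, 100, 3  # the loan-application example above

dpu = defects / units                                 # defects per unit
dpo = defects / (units * opps_per_unit)               # defects per opportunity
dpmo = defects * 1_000_000 / (units * opps_per_unit)  # defects per million opportunities

print(dpu, dpo, dpmo)  # 0.3 0.1 100000.0
```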

Descriptive statistics


Descriptive statistics is a method of statistical analysis of numeric data, discrete or continuous, that provides information about centering, spread, and normality. Results of the analysis can be in tabular or graphic format.

Design for Six Sigma – DFSS


Design for Six Sigma. Same as DMADV (below).

Design of Experiments


1 - Conducting and analyzing controlled tests to evaluate the factors that control the value of a parameter or group of parameters.

2- "Design of Experiments" (DoE) refers to experimental methods used to quantify indeterminate measurements of factors and interactions between factors statistically through observance of forced changes made methodically as directed by mathematically systematic tables.

Design Risk Assessment


A design risk assessment is the act of determining potential risk in a design process, either in a concept design or a detailed design. It provides a broader evaluation of your design beyond just CTQs, and will enable you to eliminate possible failures

Detectable Effect Size


When you are deciding what factors and interactions you want to get information about, you also need to determine the smallest effect you will consider significant enough to improve your process. This minimum size is known as the detectable effect size.

DF (degrees of freedom)


For a contingency table, equal to: (#rows - 1)(#cols - 1)

Discrete Data


Discrete data is information that can be categorized into a classification. Discrete data is based on counts. Only a finite number of values is possible, and the values cannot be subdivided meaningfully. For example, the number of parts damaged in shipment.

Distribution


Distribution refers to the behavior of a process described by plotting the number of times a variable displays a specific value or range of values rather than by plotting the value itself.

DMADV


DMADV is a data-driven quality strategy for designing products and processes, and it is an integral part of a Six Sigma Quality Initiative. DMADV consists of five interconnected phases: Define, Measure, Analyze, Design, and Verify.

DMAIC (pronounced duh-MAY-ick)

Define, Measure, Analyze, Improve, Control. Incremental process improvement using Six Sigma methodology.

DOE


A design of experiment is a structured, organized method for determining the relationship between factors (Xs) affecting a process and the output of that process.

DPMO


Defects per million opportunities (DPMO) is the number of defects observed during a standard production run divided by the number of opportunities to make a defect during that run, multiplied by one million.

Defects Per Million Opportunities. Synonymous with PPM.

To convert DPO to DPMO, we simply multiply the DPO by 1,000,000.

DPO
Defects per opportunity (DPO) represents total defects divided by total opportunities. DPO is a preliminary calculation to help you calculate DPMO (defects per million opportunities). Multiply DPO by one million to calculate DPMO.

DPU


Defects per unit (DPU) represents the number of defects divided by the number of products.

Dunnett's (1-way ANOVA)


Check to obtain a two-sided confidence interval for the difference between each treatment mean and a control mean. Specify a family error rate between 0.5 and 0.001. Values greater than or equal to 1.0 are interpreted as percentages. The default error rate is 0.05.

ECO


Engineering Change Order: engineering changes to procedures that will be implemented in a new revision of a procedure.

ECR


Engineering Change Request...A request or suggestion, by any employee, to Engineering, for an improvement in a process or procedure.

Effect


An effect is that which is produced by a cause; the impact a factor (X) has on a response variable (Y).

Empowerment


A series of actions designed to give employees greater control over their working lives. Businesses give employees empowerment to motivate them, following the theories of Abraham Maslow and Frederick Herzberg.

To invest with power or give authority; to empower employees.

Entitlement


As good as a process can get without capital investment

Erroneous


As defined by Harbour (2002), an econometric technique that is purposely executed incorrectly to establish the consequences of poor technique.

Error


Error, also called residual error, refers to variation in observations made under identical test conditions, or the amount of variation that can not be attributed to the variables included in the experiment.

Error (type I)


Error that concludes that someone is guilty, when in fact, they really are not. (Ho true, but I rejected it--concluded Ha) ALPHA

Error (type II)


Error that concludes that someone is not guilty, when in fact, they really are. (Ha true, but I concluded Ho). BETA

Facilitate
To make easy or easier. Often referred to as a facilitator or one who makes meetings more efficient.

Factor


A factor is an independent variable; an X.

FMEA - Failure Mode and Effect Analysis


Failure mode and effects analysis (FMEA) is a disciplined approach used to identify possible failures of a product or service and then determine the frequency and impact of the failure. See the tool Failure Mode and Effects Analysis.


A procedure and tools that helps identify every possible failure mode of a process or product to determine its effect on other sub-items and on the required function of the product or process.

First Time Yield – FTY


First Time Yield (FTY) is simply the number of good units produced divided by the number of total units going into the process. For example:

You have a process that is divided into four sub-processes - A, B, C and D. Assume that you have 100 units entering process A. To calculate FTY you would:

1. Calculate the yield (number out of step/number into step) of each step. 2. Multiply these together.

For Example:

100 units enter A and 90 leave. The FTY for process A is 90/100 = .9

90 units go into B and 80 units leave. The FTY for process B is 80/90 = .89

80 units go into C and 75 leave. The FTY for C is 75/80 = .94

75 units go into D and 70 leave. The FTY for D is 70/75 = .93


The total process yield is equal to FTY of A * FTY of B * FTY of C * FTY of D, or .9*.89*.94*.93 = .70.


You can also get the total process yield for the entire process by simply dividing the number of good units produced by the number going in to the start of the process. In this case, 70/100 = .70 or 70 percent yield.
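The step-by-step calculation above can be sketched in Python; the (in, out) pairs are the example's unit counts for sub-processes A through D:

```python
steps = [(100, 90), (90, 80), (80, 75), (75, 70)]  # (units in, units out) for A, B, C, D

fty_total = 1.0
for units_in, units_out in steps:
    fty_total *= units_out / units_in  # multiply the per-step yields together

print(round(fty_total, 2))  # 0.7 -- the same as 70/100 end to end
```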

Fishbone


A tool used to solve quality problems by brainstorming causes and logically organizing them by branches. Also called the Cause & Effect diagram and Ishikawa diagram. For more information, view the fishbone section.

Fisher's (1-way ANOVA):


Check to obtain confidence intervals for all pairwise differences between level means using Fisher's LSD procedure. Specify an individual error rate between 0.5 and 0.001. Values greater than or equal to 1.0 are interpreted as percentages.

Fits


Predicted values of "Y" calculated using the regression equation for each value of "X"

Fitted value


A fitted value is the Y output value that is predicted by a regression equation.

Fractional Factorial DOE


A fractional factorial design of experiment (DOE) includes selected combinations of factors and levels. It is a carefully prescribed and representative subset of a full factorial design. A fractional factorial DOE is useful when the number of potential factors is relatively large because they reduce the total number of runs required. By reducing the number of runs, a fractional factorial DOE will not be able to evaluate the impact of some of the factors independently. In general, higher-order interactions are confounded with main effects or lower-order interactions. Because higher order interactions are rare, usually you can assume that their effect is minimal and that the observed effect is caused by the main effect or lower-level interaction.

Frequency Plot


A frequency plot is a graphical display of how often data values occur.

Full factorial DOE


A full factorial design of experiment (DOE) measures the response of every possible combination of factors and factor levels. These responses are analyzed to provide information about every main effect and every interaction effect. A full factorial DOE is practical when fewer than five factors are being investigated. Testing all combinations of factor levels becomes too expensive and time-consuming with five or more factors.

F-value (ANOVA)


Measurement of distance between individual distributions. As F goes up, P goes down (i.e., more confidence in there being a difference between two means). To calculate: (Mean Square of X / Mean Square of Error)
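A minimal stdlib sketch of that calculation, using three made-up groups of measurements (a full ANOVA would also convert F to a p-value):

```python
import statistics

groups = [[5, 6, 7], [8, 9, 10], [5, 5, 6]]  # hypothetical measurements

k = len(groups)                      # number of groups
n = sum(len(g) for g in groups)      # total observations
grand_mean = statistics.fmean(x for g in groups for x in g)

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (statistics.fmean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - statistics.fmean(g)) ** 2 for g in groups for x in g)

# F = Mean Square between / Mean Square within (error)
f_value = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_value, 2))  # 14.71
```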

Gage R&R


Gage R&R, which stands for gage repeatability and reproducibility, is a statistical tool that measures the amount of variation in the measurement system arising from the measurement device and the people taking the measurement.

Gantt Chart


A Gantt chart is a visual project planning device used for production scheduling. A Gantt chart graphically displays time needed to complete tasks

Gating


Gating is the limitation of opportunities for deviation from the proven steps in the manufacturing process. The primary objective is to minimize human error.

GOAL


1. A goal is a value targeted by a design team while building a quality process/product.
2. A goal can also be defined as a customer voice.

Goodman-Kruskal Gamma


Term used to describe % variation explained by X

GRPI


GRPI stands for four critical and interrelated aspects of teamwork: goals, roles, processes, and interpersonal relationships, and it is a tool used to assess them.

Green Belt


Business team leaders responsible for managing projects and implementing improvement in their own business.

An employee of an organization who has been trained on the Six Sigma improvement methodology and who leads a process improvement or quality improvement team as *part* of their full-time job.

Histogram


A histogram is a basic graphing tool that displays the relative frequency or occurrence of continuous data values, showing which values occur most and least frequently. A histogram illustrates the shape, centering, and spread of data distribution and indicates whether there are any outliers.


A bar graph of a frequency distribution in which the widths of the bars are proportional to the classes into which the variable has been divided and the heights of the bars are proportional to the class frequencies.

Homogeneity of variance


Homogeneity of variance is a test used to determine if the variances of two or more samples are different. See the tool Homogeneity of Variance.

Horizontalization


The philosophy of turning companies with traditional silo management systems into ones that are process-oriented.

House of Quality


A House of Quality is also called a QFD (Quality Function Deployment).


I-MR Chart


An I-MR chart, or individual and moving range chart, is a graphical tool that displays process variation over time. It signals when a process may be going out of control and shows where to look for sources of special cause variation.


In-Control


In control refers to a process unaffected by special causes. A process that is in control is affected only by common causes. A process that is out of control is affected by special causes in addition to the common causes affecting the mean and/or variance of the process.


An "in-control" process is one that is free of assignable/special causes of variation. Such a condition is most often evidenced on a control chart, which displays an absence of nonrandom variation.

Independent variable


An independent variable is an input or process variable (X) that can be set directly to achieve a desired output

Intangible benefits


Intangible benefits, also called soft benefits, are the gains attributable to your improvement project that are not reportable for formal accounting purposes. These benefits are not included in the financial calculations because they are nonmonetary or are difficult to measure.

Interaction


An interaction occurs when the response achieved by one factor depends on the level of the other factor. On an interaction plot, lines that are not parallel indicate an interaction.

Interquartile Range


Difference between the 75th percentile and the 25th percentile.

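Python's statistics.quantiles computes the quartile cut points directly; the data values below are made up, and the default "exclusive" method is one of several quartile conventions:

```python
import statistics

data = [1, 3, 4, 6, 7, 8, 10, 12, 15, 20]

q1, q2, q3 = statistics.quantiles(data, n=4)  # 25th, 50th, 75th percentile cut points
iqr = q3 - q1

print(q1, q3, iqr)  # 3.75 12.75 9.0
```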

Interrelationship digraph


An interrelationship digraph is a visual display that maps out the cause and effect links among complex, multivariable problems or desired outcomes.

IQR


Interquartile range (from a box plot), representing the range between the 25th and 75th percentiles.

Ishikawa, Kaoru


Japanese quality professional widely known for the Ishikawa diagram, also known as the fishbone or cause and effect diagram.

ISO 9000 Series of Standards


Series of standards first published in 1987 by the International Organization for Standardization (ISO) as a basis for judging the adequacy of companies' quality control systems.

Just In Time (JIT) Manufacturing


A planning system for manufacturing processes that optimizes the availability of material inventories at the manufacturing site: only what is necessary, when it is necessary, and in the quantity necessary.

Typically, JIT manufacturing avoids conventional conveyor systems. JIT is a pull system, where the product is pulled along to its finish, rather than conventional mass production, which is a push system. It is made possible using various tools like KANBAN, ANDON & CELL LAYOUT.

Kaizen


Japanese term that means continuous improvement, taken from the words "kai" (change) and "zen" (good).

Kanban


Kanban: A Japanese term. It is one of the primary tools of JIT system. It maintains an orderly and efficient flow of materials throughout the entire manufacturing process. It is usually a printed card that contains specific information such as part name, description, quantity, etc.

Kano Analysis


Kano analysis is a quality measurement used to prioritize customer requirements.

Kruskal-Wallis


Kruskal-Wallis performs a hypothesis test of the equality of population medians for a one-way design (two or more populations). This test is a generalization of the procedure used by the Mann-Whitney test and, like Mood's median test, offers a nonparametric alternative to the one-way analysis of variance.

Kurtosis


Kurtosis is a measure of how peaked or flat a curve's distribution is.
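A sketch of the calculation using the population moment formula for excess kurtosis, under which a normal distribution scores 0 and flatter distributions score negative; the data set is made up:

```python
import statistics

def excess_kurtosis(data):
    """Fourth central moment over squared variance, minus 3 (population formula)."""
    m = statistics.fmean(data)
    n = len(data)
    m2 = sum((x - m) ** 2 for x in data) / n  # variance (population)
    m4 = sum((x - m) ** 4 for x in data) / n  # fourth central moment
    return m4 / m2 ** 2 - 3

# A flat (uniform-like) data set has negative excess kurtosis:
print(round(excess_kurtosis([1, 2, 3, 4, 5]), 2))  # -1.3
```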

L1 Spreadsheet


An L1 spreadsheet calculates defects per million opportunities (DPMO) and a process Z value for discrete data.

L2 Spreadsheet


An L2 spreadsheet calculates the short-term and long-term Z values for continuous data sets.

LCL


Lower Control Limit (note different from LSL): similar to Upper Control Limit (q.v.) but representing a downwards 3 x sigma deviation from the mean value of a variable.

Lean Manufacturing


Initiative focused on eliminating all waste in manufacturing processes.

The Production System Design Laboratory (PSD), Massachusetts Institute of Technology (MIT) http://lean2.mit.edu/ states that "Lean production is aimed at the elimination of waste in every area of production including customer relations, product design, supplier networks and factory management. Its goal is to incorporate less human effort, less inventory, less time to develop products, and less space to become highly responsive to customer demand while producing top quality products in the most efficient and economical manner possible."

Principles of Lean Enterprise:

Zero waiting time

Zero Inventory

Scheduling -- internal customer pull instead of push system

Batch to Flow -- cut batch sizes

Line Balancing

Cut actual process times

Leptokurtic Distribution


A leptokurtic distribution is symmetrical in shape, similar to a normal distribution, but the center peak is much higher; that is, there is a higher frequency of values near the mean. In addition, a leptokurtic distribution has a higher frequency of data in its tails.

Levels


Levels are the different settings a factor can have. For example, if you are trying to determine how the response (speed of data transmittal) is affected by the factor (connection type), you would need to set the factor at different levels (modem and LAN).

Linearity


Linearity is the variation between a known standard, or "truth," across the low and high end of the gage. It is the difference between an individual's measurements and that of a known standard or truth over the full range of expected values.

Lot


A collection of individual pieces from a common source, possessing a common set of quality characteristics and submitted as a group for acceptance at one time. (Lot size = N).

LSL


A lower specification limit is a value above which performance of a product or process is acceptable. This is also known as a lower spec limit or LSL.

LTPD


LTPD - Lot Tolerance Percent Defective: the value of incoming quality where it is desirable to reject most lots. The quality level is unacceptable. This is the RQL expressed as a percent defective.

Lurking variable


A lurking variable is an unknown, uncontrolled variable that influences the output of an experiment.

Malcolm Baldrige National Quality Award


The annual self-evaluation covers the following seven categories of criteria:

· Leadership
· Strategic Planning
· Customer and Market Focus
· Information and Analysis
· Human Resource Focus
· Process Management
· Business Results

The National Institute of Standards and Technology (NIST), a federal agency within the Department of Commerce, is responsible for managing the Malcolm Baldrige National Quality Award. The American Society for Quality (ASQ) administers the Malcolm Baldrige National Quality Award under a contract with NIST.

Main Effect


A main effect is a measurement of the average change in the output when a factor is changed from its low level to its high level. It is calculated as the average output when a factor is at its high level minus the average output when the factor is at its low level.

Mallows Statistic (C-p)


Statistic within Regression-->Best Fits which is used as a measure of bias (i.e., when predicted is different than truth). Should equal (#vars + 1)

Mann-Whitney


Mann-Whitney performs a hypothesis test of the equality of two population medians and calculates the corresponding point estimate and confidence interval. Use this test as a nonparametric alternative to the two-sample t-test.
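The core of the test is the U statistic, which can be sketched as a simple pairwise count; the sample values are made up, and a real implementation would also derive the p-value and confidence interval from U:

```python
def mann_whitney_u(a, b):
    """U statistic: count how often a value in a exceeds a value in b (ties count 0.5)."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

print(mann_whitney_u([12, 15, 18], [10, 11, 16]))  # 7.0
```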

Master Black Belt-MBB


Master Black Belts are Six Sigma or quality experts and are responsible for strategic implementations within the business. The Master Black Belt is qualified to teach other Six Sigma facilitators the methodologies, tools, and applications in all functions and levels of the company, and are a resource for utilizing statistical process control within processes.

The MBB should be available to answer questions as an expert resource, provide guidance, and support (to many teams and projects). The MBB should be in the mode of mentoring, teaching, and coaching. They should be approachable and willing and able to assist BBs when their help is needed.

Mean


The mean is the average data point value within a data set. To calculate the mean, add all of the individual data points then divide that figure by the total number of data points.

Measurement system analysis


Measurement system analysis (MSA) is a mathematical method of determining how much the variation within the measurement process contributes to overall process variability.

Median


The median is the middle point of a data set; 50% of the values are below this point, and 50% are above this point.

Metrics


Things to measure to understand quality levels.
Metric means measurement. Hence the word metric is often used in an organisation for the measures by which quality levels and trade-offs are understood.

Mode


The most often occurring value in the data set

Moods Median


Mood’s median test can be used to test the equality of medians from two or more populations and, like the Kruskal-Wallis Test, provides a nonparametric alternative to the one-way analysis of variance. Mood’s median test is sometimes called a median test.

Multicollinearity


Multicollinearity is the degree of correlation between Xs. It is an important consideration when using multiple regression on data that has been collected without the aid of a design of experiment (DOE). A high degree of multicollinearity produces unacceptable uncertainty (large variance) in regression coefficient estimates. Specifically, the coefficients can change drastically depending on which terms are in or out of the model and also on the order in which they are placed in the model.

Use Ridge Regression or Partial Least Squares (PLS) Regression to get around these problems if DoE is not an option.

Multiple regression


Multiple regression is a method of determining the relationship between a continuous process output (Y) and several factors (Xs).

Multi-vari chart


A multi-vari chart is a tool that graphically displays patterns of variation. It is used to identify possible Xs or families of variation, such as variation within a subgroup, between subgroups, or over time. See the tool Multi-Vari Chart.

Noise


Process input that consistently causes variation in the output measurement that is random and expected and, therefore, not controlled is called noise. Noise also is referred to as white noise, random variation, common cause variation, noncontrollable variation, or natural variation.

Nominal


It refers to the value that you estimate in a design process to approximate your real CTQ (Y) target value, based on the capacity of the design elements. Nominals are usually referred to as point estimates and are related to the y-hat model.

Non-parametric


Set of tools that avoids assuming a particular distribution.

Normal Distribution


Normal distribution is the spread of information (such as product performance or demographics) where the most frequently occurring value is in the middle of the range and other probabilities tail off symmetrically in both directions. Normal distribution is also referred to as the bell curve or the Gaussian distribution.

Normal probability


Used to check whether observations follow a normal distribution. P > 0.05 indicates that the data can be treated as normal.

Normality test


A normality test is a statistical process used to determine if a sample or any group of data fits a standard normal distribution. A normality test can be performed mathematically or graphically.

Null Hypothesis (Ho)


A null hypothesis (H0) is a stated assumption that there is no difference in parameters (mean, variance, DPMO) for two or more populations. According to the null hypothesis, any observed difference in samples is due to chance or sampling error.

O.C.T. - Operation Cost Target


This value represents the maximum expenditure for material, labor, outsourcing, overhead, and all other costs associated with that project. This figure can then be divided between the various operations comprising the manufacturing process, in order to control costs at each step.

O.E.M.


Original Equipment Manufacturer

Opportunity


An opportunity is anything that you inspect, measure, or test on a unit that provides a chance of allowing a defect. In simple words, an opportunity means any event that can be measured.

Any area within a product, process, service, or other system where a defect could be produced or where you fail to achieve the ideal product in the eyes of the customer. In a product, the areas where defects could be produced are the parts or connection of parts within the product. In a process, the areas are the value added process steps. If the process step is not value added, such as an inspection step, then it is not considered an opportunity.

Optimization


Adjusting the system or process inputs to produce the best possible average response with minimum variability.

OSHA


Occupational Safety and Health Administration.

Outlier


An outlier is a data point that is located far from the rest of the data. Given a mean and standard deviation, a statistical distribution expects data points to fall within a specific range. Those that do not are called outliers and should be investigated.

Output


The result of a process. The deliverables of the process; such as products, services, processes, plans, and resources.

P Value


The probability value (p-value) of a statistical hypothesis test is the probability of getting a value of the test statistic as extreme as or more extreme than that observed by chance alone, if the null hypothesis Ho, is true.

It is sometimes loosely described as the probability of wrongly rejecting the null hypothesis when it is in fact true.

It is equal to the significance level of the test for which we would only just reject the null hypothesis. The p-value is compared with the desired significance level of our test and, if it is smaller, the result is significant. That is, if the null hypothesis were to be rejected at the 5% significance level, this would be reported as "p < 0.05".

Small p-values suggest that the null hypothesis is unlikely to be true. The smaller it is, the more convincing the evidence is that null hypothesis is false. It indicates the strength of evidence for say, rejecting the null hypothesis H0, rather than simply concluding "Reject Ho" or "Do not reject Ho".
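As a concrete sketch of the definition above, the snippet below computes a two-sided p-value for a one-sample z-test using only the standard library (a z-test rather than a t-test, since Python's stdlib has no t distribution; all the numbers are hypothetical):

```python
from statistics import NormalDist

# Hypothetical example: H0 says the process mean is 5.0; the population
# standard deviation (0.5) is assumed known, so a z-test applies.
n, sample_mean, mu0, sigma = 25, 5.2, 5.0, 0.5

z = (sample_mean - mu0) / (sigma / n ** 0.5)    # observed test statistic
# Two-sided p-value: probability of a statistic at least this extreme
# under H0, i.e. the area in both tails beyond |z|
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(z, p_value)  # z = 2.0; p ≈ 0.0455, so reject H0 at the 5% level
```

Since p < 0.05, this result would be reported as significant at the 5% level, exactly as described above.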

Pareto


A bar chart that displays by frequency, in descending order, the most important defects.

Passion for Action – PFA


Passion for Action is the outward expression of highly motivated professionals dedicated to the improvement of quality in all aspects of service and manufacturing companies. PFA is a characteristic of highly successful companies, as it permeates all activities at all levels of the business culture. An organization with PFA will develop an enterprise-wide current that continuously pulls the organization to its next performance level. The concept was coined by organizational change agent consultant Rick Carangelo.

Percent of tolerance


Percent of tolerance is calculated by taking the measurement error of interest, such as repeatability and/or reproducibility, dividing by the total tolerance range, then multiplying the result by 100 to express the result as a percentage.
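The calculation described above is a one-liner; the gage error and specification limits below are hypothetical figures chosen for illustration:

```python
# Hypothetical figures: a gage R&R study reports 0.6 units of measurement
# error on a characteristic with specs LSL = 20.0 and USL = 30.0.
measurement_error = 0.6
usl, lsl = 30.0, 20.0

tolerance_range = usl - lsl  # total tolerance
percent_of_tolerance = measurement_error / tolerance_range * 100

print(percent_of_tolerance)  # 6.0, i.e. the gage consumes 6% of the tolerance
```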

Pi


Pi (TM) Perpetual Improvement is the Manufacturing Management System, designed by David Wilkerson, in which each team member continuously seeks to improve every system, process, and procedure, as well as her/his performance in the manufacturing unit. Step-by-step instructions facilitate this process.

Platykurtic Distribution


A platykurtic distribution is one in which most of the values share about the same frequency of occurrence. As a result, the curve is very flat, or plateau-like. Uniform distributions are platykurtic.

Poka Yoke


Japanese term which means mistake proofing. A poka yoke device is one that prevents incorrect parts from being made or assembled, or easily identifies a flaw or error.

To avoid (yokeru) inadvertent errors (poka).

Pooled Standard Deviation


Pooled standard deviation is the standard deviation remaining after removing the effect of special cause variation-such as geographic location or time of year. It is the average variation of your subgroups.

Population


The entire collection of items that is the focus of concern.

Population Defect Rate


The true proportion of defects in the population. This is usually estimated by a sample, rather than getting true population data. Since estimates are less than perfect, it is common to indicate how imperfect they are.

PPAP


Production Part Approval Process

Ppk

Ppk represents the actual, or overall, capability of the process.

PPM


Parts Per Million. Typically used in the context of defect Parts Per Million opportunities. Synonymous with DPMO.

Precision


Lack of variation in your measurement. Can be measured in terms of the standard deviation of your measurement system. Has nothing to do with accuracy, which is lack of bias. A precise rifle will shoot small groups. An accurate rifle is properly sighted in.

Prediction Band (or interval)


Measurement of the certainty of the scatter about a certain regression line. A 95% prediction band indicates that, in general, 95% of the points will be contained within the bands.

Probability


Probability refers to the chance of something happening, or the fraction of occurrences over a large number of trials. Probability can range from 0 (no chance) to 1 (full certainty).

Probability of Defect


Probability of defect is the statistical chance that a product or process will not meet performance specifications or lie within the defined upper and lower specification limits. It is the ratio of expected defects to the total output and is expressed as a proportion or a percentage.

Process


A series of steps or actions that lead to a desired result or output.

A set of common tasks that creates a product, service, process, or plan that will satisfy a customer or group of customers.

Process Capability


Process capability refers to the ability of a process to produce a defect-free product or service. Various indicators are used-some address overall performance, some address potential performance.

Process Instance


An instance of a process (e.g. the production of a specific purchase order is one instance of the purchasing process)

Process Management


Also called Business Process Quality Management or Reengineering. The concept of defining macro and micro processes, assigning ownership, and creating responsibilities of the owners.

Process Map


A visual representation of the work-flow, either within a process or an image of the whole operation. One differentiates between "30,000-foot overviews", "medium image", "homing in", "zooming in", "micro map", etc. A good Process Map should allow people unfamiliar with the process to understand the interaction of causes during the work-flow. A good Process Map should also contain additional information relating to the Six Sigma project, i.e. information per critical step about input and output variables, time, cost, and DPU value. A program for the creation of Process Maps is Microsoft Visio.

Process Performance Management


The overseeing of process instances to ensure their quality and timeliness. Can also include proactive and reactive actions to ensure a good result.

Producers Risk


Concluding something is bad when it is actually good (Type I error).

Productivity Target


Each operation in the manufacturing process is assigned a Productivity Target value. This value represents the minimum number of conformant products (value-added entities) per designated period. (See also Value-Added)

Project Scope


Defined and specific project beginning and end points. The more specific the details (what's in scope and what's out of scope), the less a project may experience "scope creep."

p-value


The p-value measures how unusual or large a statistical test result is with respect to some statement of no difference or effect. A p-value close to zero signals that a difference is very likely to exist, while large p-values closer to 1 imply that there is no detectable difference for the sample size used.

More specifically, the p-value of a statistical significance test represents the probability of obtaining values of the test statistic that are equal to or greater in magnitude than the observed test statistic. To calculate a p-value, collect sample data and calculate the appropriate test statistic for the test you are performing. For example, t-statistic for testing means, Chi-Square or F statistic for testing variances etc. Using the theoretical distribution of the test statistic, find the area under the curve (for continuous variables) in the direction(s) of the alternative hypothesis using a look up table or integral calculus. In the case of discrete variables, simply add up the probabilities of events occurring in the direction(s) of the alternative hypothesis that occur at and beyond the observed test statistic value.

Q1


25th percentile (from box plot)

Q3


75th percentile (from box plot)

QFD(Quality Function Deployment)


Quality Function Deployment (QFD) is a systematic process for motivating a business to focus on its customers. It is used by cross-functional teams to identify and resolve issues involved in providing products, processes, services and strategies which will more than satisfy their customers. A prerequisite to QFD is Market Research. This is the process of understanding what the customer wants, how important these benefits are, and how well different providers of products that address these benefits are perceived to perform. This is a prerequisite to QFD because it is impossible to consistently provide products which will attract customers unless you have a very good understanding of what they want.

Qualitative data


Discrete data

Quality Assurance


A planned and systematic pattern of all actions necessary to provide adequate confidence that the product optimally fulfils the customer's expectations.

A planned and systematic set of activities to ensure that requirements are clearly established and the defined process complies to these requirements.

Quality Control


Also called statistical quality control. The managerial process during which actual process performance is evaluated and actions are taken on unusual performance.
It is a process to ensure that a product meets predefined standards and requisites.

Quality Improvement


The organized creation of beneficial changes in process performance levels.

Quality Target


Each operation in the manufacturing process, which has an effect on the conformance of the end product to the customer's specifications, is assigned a Quality Target value. This value represents the maximum allowable discrepancies per 1,000 opportunities. (See also Opportunity)

Quantitative data


Continuous data

Radar Chart


A radar chart is a graphical display of the differences between actual and ideal performance. It is useful for defining performance and identifying strengths and weaknesses.

Random Sample


A data point taken at random from the universe or population of your process.

Randomization


Running experiments in a random order, not the standard order in the test layout. Helps to eliminate the effect of "lurking variables", uncontrolled factors which might vary over the length of the experiment.

Range


The difference or interval between the smallest (or lowest) and largest (or highest) values in a frequency distribution.

Rational Subgroup


A rational subgroup is a subset of data defined by a specific factor such as a stratifying factor or a time period. Rational subgrouping identifies and separates special cause variation (variation between subgroups caused by specific, identifiable factors) from common cause variation (variation within subgroups).

Reengineering


Also called Business Process Quality Management or Process Management. The concept of defining macro and micro processes, assigning ownership, and creating responsibilities of the owners.

Regression


The relationship between the mean value of a random variable and the corresponding values of one or more independent variables.

Regression analysis


Regression analysis is a method of analysis that enables you to quantify the relationship between two or more variables (X) and (Y) by fitting a line or plane through all the points such that they are evenly distributed about the line or plane. Visually, the best-fit line or plane summarizes the trend of the plotted points.

Repeatability


Repeatability is the variation in measurements obtained when one person takes multiple measurements using the same techniques on the same parts or items.

Replicates


Number of times you ran each corner. Ex. 2 replicates means you ran one corner twice.

Replication


Replication occurs when an experimental treatment is set up and conducted more than once. If you collect two data points at each treatment, you have two replications. In general, plan on making between two and five replications for each treatment. Replication lets you estimate experimental error and increases the precision of the estimated effects.

Reproducibility


Reproducibility is the variation in average measurements obtained when two or more people measure the same parts or items using the same measuring technique.

Residual


A residual is the difference between the actual Y output value and the Y output value predicted by the regression equation. The residuals in a regression model can be analyzed to reveal inadequacies in the model. Also called "errors"

Resolution


Resolution is a measure of the degree of confounding among effects. Roman numerals are used to denote resolution. The resolution of your design defines the amount of information that can be provided by the design of experiment. As with a computer screen, the higher the resolution of your design, the more detailed the information you can see.

Response


A reaction, as that of an organism or a mechanism, to a specific stimulus.

Robust


Insensitivity of a process output to the variation of the process inputs.

Robust Process


A robust process is one that is operating at 6 sigma and is therefore resistant to defects. Robust processes exhibit very good short-term process capability (high short-term Z values) and a small Z shift value.

Robustness


The characteristic of the process output or response to be insensitive to the variation of the inputs. Setting the process targets using the process interactions increases the likelihood of the process exhibiting robustness.

Rolled Throughput Yield – RTY


Rolled Throughput Yield (RTY) is the probability that a single unit can pass through a series of process steps free of defects.

Next we will turn our attention to a Rolled Throughput Yield example. First Time Yield (FTY) considers only what went into a process step and what went out. Rolled Throughput Yield adds the consideration of rework. Using the previous example:

Process A = 100 units in and 90 out Process B = 90 in and 80 out Process C = 80 in and 75 out Process D = 75 in and 70 out.

If in order to get the yield out of each step we had to do some rework (which we probably did) then it really looks more like this:

Process A = 100 units, 10 scrapped and 5 reworked to get the 90. The calculation becomes (100 - (10 + 5))/100 = 85/100 = .85. This is the true yield when you consider rework and scrap.

Process B = 90 units in, 10 scrapped and 7 reworked to get the 80. (90 - (10 + 7))/90 = .81

Process C = 80 units in, 5 scrapped and 3 reworked to get the 75. (80 - (5 + 3))/80 = .9

Process D = 75 units in, 5 scrapped and 10 reworked to get the 70. (75 - (5 + 10))/75 = .8

Now to get the true Rolled Throughput Yield (Considering both scrap and the rework necessary to attain what we thought was first time throughput yield) we find that the true yield has gone down significantly:

.85*.81*.9*.8 = .49572 or, rounded, a 50% yield. A substantially worse and substantially truer measurement of the process capability.
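The worked example above can be sketched in a few lines of Python. Using exact step yields (rather than the rounded .85, .81, .9, .8) gives about 0.496, which still rounds to the 50% figure quoted above:

```python
# The four process steps above, as (units in, scrapped, reworked) triples
steps = [(100, 10, 5), (90, 10, 7), (80, 5, 3), (75, 5, 10)]

rty = 1.0
for units_in, scrapped, reworked in steps:
    # First-pass yield of the step: units untouched by scrap OR rework
    step_yield = (units_in - (scrapped + reworked)) / units_in
    rty *= step_yield  # RTY is the product of the step yields

print(rty)  # ≈ 0.496, roughly the 50% rolled throughput yield above
```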

Root Cause


A factor that caused a non-conformance and should be permanently eliminated through process improvement.

RQL


RQL - Rejectable Quality Level: generic term for the incoming quality level for which there is a low probability of accepting the lot. The quality level is substandard.

R-squared


A mathematical term describing how much variation is being explained by the X. FORMULA: R-sq = SS(regression) / SS(total)

R-squared (adj)


Unlike R-squared, R-squared adjusted takes into account the number of X's and the number of data points. FORMULA: R-sq (adj) = 1 - [(SS(error)/DF(error)) / (SS(total)/DF(total))]

Takes into account the number of X's and the number of data points...also answers: how much of total variation is explained by X.
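Both statistics can be computed directly from the ANOVA sums of squares, using the standard adjusted formula that penalizes the error mean square by its degrees of freedom (the sums of squares below are hypothetical):

```python
# Hypothetical ANOVA sums of squares for a one-predictor model on n = 10 points
ss_total, ss_regression = 100.0, 80.0
ss_error = ss_total - ss_regression
df_total, df_error = 9, 8  # n - 1, and n - (number of Xs) - 1

r_sq = ss_regression / ss_total
r_sq_adj = 1 - (ss_error / df_error) / (ss_total / df_total)

print(r_sq, r_sq_adj)  # 0.8 and 0.775; the adjustment penalizes extra Xs
```

Adding more X's can only raise R-sq, but R-sq (adj) will fall if the new terms do not explain enough extra variation.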

Run Chart


A performance measure of a process over a specified period of time used to identify trends or patterns.

S.M.A.R.T.


Specific, Measurable, Attainable, Relevant and Time-bound.

Sample


A Sample is a portion of the whole collection of items (population).
An estimate of a larger group of people or items; also called a subgroup

A portion or subset of units taken from the population, whose characteristics are used for analysis on the notion that any unit can represent the population.

Sample Sign Test


Tests whether the sample median is equal to a hypothesized value.

Sample Size Calc.


The sample size calculator is a spreadsheet tool used to determine the number of data points, or sample size, needed to estimate the properties of a population.

Sampling


Sampling is the practice of gathering a subset of the total data available from a process or a population

Scatter plot


A scatter plot, also called a scatter diagram or a scattergram, is a basic graphic tool that illustrates the relationship between two variables. The dots on the scatter plot represent data points.

Scorecard


A scorecard is an evaluation device, usually in the form of a questionnaire, that specifies the criteria your customers will use to rate your business's performance in satisfying their requirements.

Screening


An inspection step in the process, designed to distinguish between good and bad products. It utilizes an attribute measuring method.

Screening DOE


A screening design of experiment (DOE) is a specific type of a fractional factorial DOE. A screening design is a resolution III design, which minimizes the number of runs required in an experiment.

Segmentation


Segmentation is a process used to divide a large group into smaller, logical categories for analysis. Some commonly segmented entities are customers, data sets, or markets

Segmentation involves dividing process data into segments. For example, you may collect the cause of defects of a process and place the data into a pareto chart. The pareto chart then displays the segmentation...type A defects are 50%, type B defects are 30% and type C defects are 10%. These are possible ways to segment the data.

S-hat Model


It describes the relationship between output variance and input nominals

Ship Date


Ship Date is the latest date an order can depart the manufacturing facility

Sigma


The Greek letter σ (sigma) refers to the standard deviation of a population. Sigma, or standard deviation, is used as a scaling factor to convert upper and lower specification limits to Z. Therefore, a process with three standard deviations between its mean and the nearest specification limit has a Z value of 3 and is referred to as a three sigma process.

Sigma Level


Determining sigma levels of processes (one sigma, six sigma, etc.) allows process performance to be compared throughout an entire organization, because it is independent of the process. It is merely a determination of opportunities and defects, however the terms are appropriately defined for that specific process.

Sigma is a statistical term that measures how much a process varies from perfection, based on the number of defects per million units.

One Sigma = 690,000 per million units
Two Sigma = 308,000 per million units
Three Sigma = 66,800 per million units
Four Sigma = 6,210 per million units
Five Sigma = 230 per million units
Six Sigma = 3.4 per million units

In formulae for control limits and process capabilities, sigma is the symbol for Standard Deviation, calculated from the squares of the deviations of measured samples from the mean value (or sometimes by other methods using 'magic' numbers). For a normally distributed output, 99.7% would be expected to fall between +/-(3 x sigma) levels.
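The defects-per-million figures in the table above come from the normal distribution combined with the conventional 1.5-sigma long-term shift; the sketch below reproduces them with Python's standard library:

```python
from statistics import NormalDist

def dpmo(sigma_level, shift=1.5):
    """Defects per million for a given sigma level, applying the
    conventional 1.5-sigma long-term shift behind the table above."""
    return (1 - NormalDist().cdf(sigma_level - shift)) * 1_000_000

print(round(dpmo(6), 1))  # 3.4, the familiar Six Sigma figure
print(round(dpmo(3)))     # 66807; the table's 66,800 is a rounded value
```

The other rows of the table are likewise rounded values of this same calculation.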

SIPOC


SIPOC stands for suppliers, inputs, process, output, and customers. You obtain inputs from suppliers, add value through your process, and provide an output that meets or exceeds your customer's requirements.

Supplier-Input-Process-Output-Customer: Method that helps you not to forget something when mapping processes.

Six Sigma


Six Sigma is a methodology that provides businesses with the tools to improve the capability of their business processes. This increase in performance and decrease in process variation leads to defect reduction and vast improvement in profits, employee morale and quality of product.

Skewness


Most often, the median is used as a measure of central tendency when data sets are skewed. The metric that indicates the degree of asymmetry is called, simply, skewness. Skewness often results in situations when a natural boundary is present.

Span


A measure of variation for "S-shaped" fulfilment Y's

Special Cause Variation


Unlike common cause variability, special cause variation is caused by known factors that result in a non-random distribution of output. Also referred to as "exceptional" or "assignable" variation. Example: Few X's with big impact.

Special cause variation is a shift in output caused by a specific factor such as environmental conditions or process input parameters. It can be accounted for directly and potentially removed and is a measure of process control.

Specification


Customer's expectation for product or service deliverable/output.

Spread


The spread of a process represents how far data points are distributed away from the mean, or center. Standard deviation is a measure of spread.

SS Process Report

The Six Sigma process report is a Minitab™ tool that calculates process capability and provides visuals of process performance.

SS Product Report


The Six Sigma product report is a Minitab™ tool that calculates the DPMO and short-term capability of your process.

Stability


Stability represents variation due to elapsed time. It is the difference between an individual's measurements taken of the same parts after an extended period of time using the same techniques.

Stable Process


A process that does not contain any special cause variation -- it only contains common cause variation. Common cause variation is that which is normal to the process and doesn't change over time.

Stakeholder


People who will be affected by the project or can influence it but who are not directly involved with doing the project work. Examples are Managers affected by the project, Process Owners, People who work with the process under study, Internal departments that support the process, customers, suppliers, and financial department.

Standard Deviation


A statistic used to measure the variation in a distribution. Sample standard deviation is equal to the square root of (the sum of the squared deviations from the mean divided by the sample size minus 1). Where the whole population is known, the minus 1 "fudge factor" should be omitted.

The Standard Deviation is the square root of (the sum of the squared deviations from the mean, divided by the sample size minus one).
In formulae it is often represented by the letters SD or the symbol (Greek letter) sigma.
Although it is closely related to, and used in calculations for, the Sigma level of a process you need to be careful to distinguish the two meanings.

Standard Deviation (s)

Standard deviation is a measure of the spread of data in relation to the mean. It is the most common measure of the variability of a set of data. If the standard deviation is based on a sampling, it is referred to as "s".
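The sample versus population distinction described above maps directly onto two stdlib functions; the data set here is hypothetical:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical measurements, mean = 5

# Sample standard deviation "s": divides by n - 1 (the "minus 1" above)
s = statistics.stdev(data)
# Population standard deviation: divides by n, for a fully known population
sigma = statistics.pstdev(data)

print(sigma, round(s, 2))  # 2.0 and 2.14; the sample version is larger
```

The sample version is always slightly larger, since dividing by n - 1 compensates for estimating the mean from the same data.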

Standard Order


Design of experiment (DOE) treatments often are presented in a standard order. In a standard order, the first factor alternates between the low and high setting for each treatment. The second factor alternates between low and high settings every two treatments, the third factor every four treatments, and so on.

Statistic


Any number calculated from sample data that describes a sample characteristic.


A numerical value, such as standard deviation or mean, that characterizes the sample or population from which it was derived.

Statistical Process Control (SPC)


Statistical process control is the application of statistical methods to analyze and control the variation of a process.

Statistics


The mathematics of the collection, organization, and interpretation of numerical data, especially the analysis of population characteristics by inference from sampling.

Stratification


A stratifying factor, also referred to as stratification or a stratifier, is a factor that can be used to separate data into subgroups. This is done to investigate whether that factor is a significant special cause factor.


Stratification involves looking at process data and splitting it into distinct layers (almost like rock is stratified). By looking at the stratified data, you can then possibly see different processes.

For instance, you may process loans at your company. Once you stratify by loan size (e.g. less than 10 million, greater than 10 million), you may see that the central tendency metrics are completely different which would indicate that you have two entirely different processes...maybe only one of the processes is broken.

Stratification is related to, but different from, Segmentation.

Sub-Group


A distinct group within a group; a subdivision or subset of a group.

System Audit


System Audit - Also called Process Audit: can be conducted for any activity. Usually made against a specific document such as operating procedure, work instruction, training manual, etc.

Taguchi Method


A technique for designing and performing experiments to investigate processes where the output depends on many factors (variables; inputs) without having to tediously and uneconomically run the process using all possible combinations of values of those variables. By systematically choosing certain combinations of variables it is possible to separate their individual effects.

Team Leader


Each work cell is supervised by a Team Leader, who is responsible for maintaining optimal quality and productivity. Generally, this is a top-level technician who also is a natural leader.

Throughput


Output or production, as of a computer program, over a period of time.

Tolerance Range


Tolerance range is the difference between the upper specification limit and the lower specification limit

Total Observed Variation


Total observed variation is the combined variation from all sources, including the process and the measurement system.

Total Prob of Defect


The total probability of defect is equal to the sum of the probability of defect above the upper spec limit-p(d), upper-and the probability of defect below the lower spec limit-p(d), lower.
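For a normally distributed output, the two tail areas described above can be computed with the standard library; the process parameters and spec limits here are hypothetical:

```python
from statistics import NormalDist

# Hypothetical process: output normally distributed with mean 10.0 and
# standard deviation 1.0, spec limits LSL = 8.0 and USL = 11.5.
process = NormalDist(mu=10.0, sigma=1.0)
lsl, usl = 8.0, 11.5

p_lower = process.cdf(lsl)      # p(d), lower: area below the LSL
p_upper = 1 - process.cdf(usl)  # p(d), upper: area above the USL
total_p_defect = p_lower + p_upper

print(round(total_p_defect, 4))  # ≈ 0.0896, about 9% defective
```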

Total Quality Management


A short label for the list of prerequisites for achieving world-class quality. Use began in the last half of the twentieth century. Although there is no agreement on what were the essential elements of TQM, many use the criteria of the Malcolm Baldrige National Quality Award.

Transfer function


A transfer function describes the relationship between lower level requirements and higher level requirements. If it describes the relationship between the nominal values, then it is called a y-hat model. If it describes the relationship between the variances, then it is called an s-hat model.

Transformations
Used to make non-normal data look more normal.

Trimmed Mean


Compromise between the mean and median. The Trimmed Mean is calculated by eliminating a specific percentage of the smallest and largest observation from the data set and then calculating the average of the remaining observation. It is useful for data with potential extreme values.
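The trimming procedure described above is easy to sketch in plain Python (scipy.stats.trim_mean offers the same idea; the data set below is hypothetical):

```python
def trimmed_mean(data, proportion):
    """Drop the given proportion of observations from each end of the
    sorted data, then average what remains."""
    s = sorted(data)
    k = int(len(s) * proportion)  # observations to cut per tail
    trimmed = s[k:len(s) - k] if k else s
    return sum(trimmed) / len(trimmed)

# Hypothetical data with one extreme value; trimming 12.5% cuts one
# observation from each end, so the 100 no longer distorts the average.
data = [1, 2, 3, 4, 5, 6, 7, 100]
print(trimmed_mean(data, 0.125))  # 4.5, versus a raw mean of 16.0
```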

Trivial many


The trivial many refers to the variables that are least likely responsible for variation in a process, product, or service.

TRIZ


TRIZ is a methodology employed in Design for Six-Sigma (DFSS) for identifying design alternatives.

TRIZ (pronounced "TREEZ", the Russian acronym for the Theory of Inventive Problem Solving) is a unique knowledge-based technology for generating new concepts. The power of TRIZ is based on the understanding of the evolution of successful products, ways to overcome psychological barriers and generalization of the ways used to solve problems in the most innovative inventions. TRIZ involves a systematic analysis of a problem to be solved and the application of a series of guidelines for the generation of solution alternatives.

T-test

A t-test is a statistical tool used to determine whether a significant difference exists between the means of two distributions or the mean of one distribution and a target value.


The t-test employs the statistic (t), with n-1 degrees of freedom, to test a given statistical hypothesis about a population parameter. It is usually used with small sample sizes (n < 30).

Tukey's (1-way ANOVA)


Check to obtain confidence intervals for all pairwise differences between level means using Tukey's method (also called Tukey's HSD or the Tukey-Kramer method). Specify a family error rate between 0.5 and 0.001.

Type I Error


In hypothesis testing: rejecting the null hypothesis (no difference) when it is in fact true (e.g. convicting an innocent person.)

Type I errors occur when a relationship is assumed where none exists.

Type II Error


In hypothesis testing: failing to reject a false null hypothesis (e.g., failing to convict a guilty person).

Type II errors occur when no relationship is assumed when in fact one exists.

U Chart
A control chart displaying the count of defects per unit.

UCL


Upper Control Limit (note: different from USL), representing a 3-sigma upward deviation from the mean value of a variable (see also LCL). For normally distributed output, 99.7% of values should fall between the UCL and LCL.

When used on control charts, the 3-sigma level can be estimated from sample-to-sample values or batch-to-batch averages using a tabulated control chart constant (such as d2 or A2), and is used to flag unexpected deviations.
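As a hedged sketch of individuals-chart limits, with sigma estimated from the average moving range using the standard d2 = 1.128 constant (the data values are illustrative):

```python
import statistics

def control_limits(values):
    """Individuals-chart limits: centre line +/- 3 sigma, where sigma is
    estimated from the average moving range (d2 = 1.128 for n = 2)."""
    centre = statistics.mean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = statistics.mean(moving_ranges) / 1.128
    return centre - 3 * sigma, centre, centre + 3 * sigma

data = [9.9, 10.1, 10.0, 10.2, 9.8, 10.1, 10.0]
lcl, centre, ucl = control_limits(data)
```

Points that land outside the LCL-UCL band would be flagged as possible special cause variation and investigated.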

Unbiased Statistic


A statistic is an unbiased estimate of a given parameter when the mean of the sampling distribution of that statistic can be shown to be equal to the parameter being estimated.

Unexplained Variation (S)


Regression statistical output that shows the unexplained variation in the data.

Unit


A unit is any item that is produced or processed.

Univariate


Involving a single variable. A univariate random variable is a random variable with a numerical value defined on a given sample space.

USL


An upper specification limit, also known as an upper spec limit or USL, is a value below which the performance of a product or process is acceptable.

Value


Value = Function / Cost
Value is what the customer pays for in the exchange.

Value can also be equated to quality over cost: in the right context, the higher the quality, the lower the cost should be. It can also be construed that higher quality requires a higher cost to implement, but that is not the sense intended here. Improving quality sometimes does cost more in the short term, but over the long term, hidden and opportunity costs are perpetually reduced.

Value-Added


Value-Added refers to the development, processing, or modification of an item, which increases its revenue value to the manufacturing firm.

Variable (Data)


Variable data is quantitative data. There are two types: discrete (count) data and continuous (measurement) data.

Variance


The square of the standard deviation.

The deviation from what was expected.

Deviation from the process mean, i.e., away from the target, which often results in extra cost to bring the process back on target.

Variation


Variation is the fluctuation in process output. It is quantified by standard deviation, a measure of the average spread of the data around the mean. Variation is sometimes called noise. Variance is squared standard deviation.
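The relationship between the standard deviation and the variance can be checked directly (the data values are illustrative):

```python
import statistics

data = [4.0, 4.2, 3.9, 4.1, 4.3]
s = statistics.stdev(data)        # sample standard deviation (n - 1 divisor)
var = statistics.variance(data)   # variance computed directly
# the variance is the square of the standard deviation
```

Both quantities measure spread around the mean; the standard deviation is usually preferred for reporting because it is in the same units as the data.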

Variation (Common Cause)


Common cause variability is a source of variation caused by unknown factors that result in a steady but random distribution of output around the average of the data. It is a measure of the process's potential, or how well the process can perform when special cause variation is removed; therefore, it is a measure of the process technology. Common cause variation is also called random variation, noise, non-controllable variation, within-group variation, or inherent variation. Example: many X's, each with a small impact.

Variation (Special Cause)


Unlike common cause variability, special cause variation is caused by known factors that result in a non-random distribution of output. Also referred to as "exceptional" or "assignable" variation. Example: Few X's with big impact.

Special cause variation is a shift in output caused by a specific factor such as environmental conditions or process input parameters. It can be accounted for directly and potentially removed and is a measure of process control.

Vital Few


Derived from the Pareto chart, the term indicates that most defects come from relatively few causes (the 80/20 rule).

Sometimes the vital few can be viewed in a positive light: for example, 20% of students may account for a country's good average result in a typical exam, or 20% of a country's people may generate most of its wealth.

Web Chart


A graphical visual management tool that displays multiple measurables in a spider-web-like chart, allowing quick analysis of and comparison between data streams. A highly effective world-class manufacturing visual control tool used in environments such as lean manufacturing and kaizen. Published by the American Society for Quality in the early nineties.

Whisker


On a box plot, the whiskers display the minimum and maximum observations that fall within 1.5 × IQR (the span from the 25th to the 75th percentile) of the 25th or 75th percentile. Outliers are observations that fall outside this 1.5 × IQR range.
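A sketch of the 1.5 × IQR fence rule using Python's standard library (the data values are illustrative; note that quartile interpolation details vary between tools, and `statistics.quantiles` uses the exclusive method by default):

```python
import statistics

def whisker_fences(data):
    """1.5 x IQR fences used for box-plot whiskers; points outside are outliers."""
    q1, _, q3 = statistics.quantiles(data, n=4)  # 25th, 50th, 75th percentiles
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

data = [5, 6, 6, 7, 7, 7, 8, 8, 9, 30]
low, high = whisker_fences(data)
outliers = [x for x in data if x < low or x > high]
```

The whiskers themselves would extend only to the most extreme observations inside the fences; points beyond them are plotted individually as outliers.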

White Noise
Common cause variation in a process

Work Cell


A logical and productive grouping of machinery, tooling, and personnel which produces a family of similar products. Each cell has a leader who manages the work flow, and is responsible for maintaining optimal quality and productivity. A key element in the Pi (TM) Perpetual Improvement system.

X Bar


Also known as the sample mean. See Mean.

X-Bar and R Charts
X-Bar and R charts: this set of two charts is the most commonly used statistical process control procedure. They are used to monitor process behavior and outcome over time, provide very good measures, and allow the process owner to make decisions and control the process.

Yellow Belt – YB


Sometimes referred to as Green Belts (GB) -- varies from business to business. A Yellow Belt typically has a basic knowledge of Six Sigma, but does not lead projects on their own, as does a Green Belt or Black Belt.

Yield


Yield is the percentage of a process that is free of defects.

OR

Yield is defined as the percentage of met commitments (total of defect-free events) over the total number of opportunities.

Z Value


A Z value is a data point's position between the mean and another location as measured by the number of standard deviations. Z is a universal measurement because it can be applied to any unit of measure. Z is a measure of process capability and corresponds to the process sigma value.
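A minimal sketch relating a Z value to a tail defect probability, using `statistics.NormalDist` (the limit, mean, and standard deviation below are illustrative):

```python
from statistics import NormalDist

def z_value(x, mean, stdev):
    """Distance of x from the mean, in standard deviations."""
    return (x - mean) / stdev

# An upper spec limit of 13 on a process with mean 10 and sigma 1 sits 3 sigma out
z = z_value(13, 10, 1)
defect_probability = 1 - NormalDist().cdf(z)  # tail area beyond the limit
```

Because Z is unitless, the same calculation applies whether the data are millimetres, minutes, or defect counts; the tail area gives the probability of output falling outside the limit.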

Z bench


Z bench is the Z value that corresponds to the total probability of a defect.

Z LT


Z long term (ZLT) is the Z bench calculated from the overall standard deviation and the average output of the current process. Used with continuous data, ZLT represents the overall process capability and can be used to determine the probability of making out-of-spec parts within the current process.

Z Score


A measure of the distance in standard deviations of a sample from the mean.

Z ST

ZST represents the process capability when special factors are removed and the process is properly centered. ZST is the metric by which processes are compared.

5 Why's

The 5 Whys typically refers to the practice of asking "why" five times in succession in order to get to the root cause of a problem. No special technique is required.

An example is in order:

You are on your way home from work and your car stops:

Why did your car stop? Because it ran out of gas.

Why did it run out of gas? Because I didn't buy any gas on my way to work.

Why didn't you buy any gas this morning? Because I didn't have any money.

Why didn't you have any money? Because I lost it all last night in a poker game.

Failure to determine the root cause assures that you will be treating the symptoms of the problem instead of its cause, in which case, the disease will return, that is, you will continue to have the same problems over and over again.

Also note that the actual number of whys is not important as long as you get to the root cause. One might well ask: why did you lose all your money in the poker game last night?

6 Ms

Machines
Methods
Materials
Measurements
Mother Nature (Environment)
Manpower (People)

8 D Process

The 8D process is a problem-solving method for product and process improvement. It is structured into eight steps (the D's) and emphasizes teamwork. It is often required in the automotive industry. The eight basic steps are: define the problem and prepare for process improvement; establish a team; describe the problem; develop interim containment; define and verify the root cause; choose permanent corrective action; implement corrective action; prevent recurrence; and recognize and reward the contributors.

Of course, different companies have their different twists on what they call the steps, etc...but that is the basics.
