
Title QA cheat sheet - Summary Quality Control
Author Victor Vong
Course Quality Control
Institution Nanyang Technological University

Summary

Cheat Sheet of QA...


Description

Quality assurance is a way of preventing mistakes and defects in manufactured products and avoiding problems when delivering products or services to customers, which ISO 9000 defines as "part of quality management focused on providing confidence that quality requirements will be fulfilled".

Statistics: The science concerned with collection, organization, analysis, interpretation, and presentation of data

Expected Value: E[X] = Σ x·P(x) for a discrete random variable (the probability-weighted mean of its possible values)

Statistical Quality Control: use of statistical methods in the monitoring and maintaining of the quality of products and services. 1) Acceptance sampling 2) SPC (control charts)

7M's: Management, Manpower, Marketing, Method, Machine, Material, and Money

7 Quality Tools:
1) Check Sheet: structured table or form for collecting data and analysing metrics
2) Histogram: shows the density of data in a given distribution; helps understand which factors or data repeat more often
3) Pareto Diagram: highlights the most important factors that are the major cause of a problem or failure (List, Measure, Rank the elements, Create Cumulative Distributions, Draw & Interpret the Pareto Curve)
4) Cause & Effect Diagram (Fishbone/Ishikawa): identifies the various causes (or factors) leading to an effect (or problem) and derives meaningful relationships between them (People, Methods, Machines, Material, Measurements, Environment)
5) Stratification: divide the data and conquer the meaningful information to solve a problem (reduce variation; 4M's: men, machines, material and method)
6) Scatter Diagram: establishes a relationship between a problem (overall effect) and its causes (causation is a subset of association); y = a + bx + e (a: additive error, b: multiplicative error, e: fluctuation)
7) Graph/Control Chart (Shewhart): determines if the process is stable and capable within current conditions
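The Pareto steps listed above (list, measure, rank, cumulate) can be sketched in a few lines of Python; the defect categories and counts below are made up for the example:

```python
from itertools import accumulate

def pareto(counts):
    """Rank categories by count and compute cumulative percentages.

    counts: dict mapping category -> occurrence count.
    Returns (category, count, cumulative_percent) tuples, sorted
    from most to least frequent (the Pareto ordering).
    """
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(counts.values())
    cum = list(accumulate(n for _, n in ranked))
    return [(cat, n, 100.0 * c / total)
            for (cat, n), c in zip(ranked, cum)]

# Hypothetical defect data: "scratch" dominates, so it is attacked first.
table = pareto({"scratch": 45, "dent": 30, "stain": 15, "crack": 10})
```

Plotting the cumulative column against the ranked categories gives the Pareto curve.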

Six Sigma:

Poisson Distribution: the discrete probability distribution of the number of events occurring in a given time period, given the average number of times the event occurs over that time period
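A minimal sketch of the Poisson probability mass function, P(X = k) = λ^k e^(−λ) / k!, using only the standard library:

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson random variable with mean rate lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)
```

For example, `poisson_pmf(0, 2.0)` gives the chance of seeing no events in a period where 2 are expected on average.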

Process Capability: defined mathematically as the interval containing 99.73% of measurements of a product characteristic under the influence of only random variations; for normally distributed measurements, this interval is six sigma. Meaningful only when the process is stable. Stated in absolute measurement units, e.g. ohms. Does NOT involve any product specifications.

Quick PC estimation: R = max – min of each subgroup; sigma-hat = R-bar / d2, with d2 taken from the table of control chart constants.
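The quick estimate can be sketched in code; the constant d2 = 2.326 is the standard table value for subgroups of size 5 (an assumption of this example):

```python
def quick_pc(subgroups, d2=2.326):
    """Quick process capability estimate from subgroup ranges.

    sigma_hat = R_bar / d2; the capability is the 6-sigma spread.
    d2 = 2.326 is the standard table constant for subgroups of size 5.
    """
    ranges = [max(s) - min(s) for s in subgroups]
    r_bar = sum(ranges) / len(ranges)
    sigma_hat = r_bar / d2
    return 6 * sigma_hat
```

Note the result is in the same absolute measurement units as the data and involves no product specifications.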

Normal Distribution: A normal distribution has a bell-shaped density curve described by its mean and standard deviation
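Because the normal tails drive the sigma-to-ppm conversions used elsewhere in this sheet, here is a small sketch that computes the fraction outside ±kσ limits from the error function; the optional mean shift models the 1.5-sigma shift mentioned under Six Sigma:

```python
import math

def norm_cdf(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ppm_outside(k, shift=0.0):
    """Parts per million outside +/- k sigma limits when the process
    mean is shifted by `shift` sigma toward the upper limit."""
    upper_tail = norm_cdf(shift - k)   # P(X > +k) for X ~ N(shift, 1)
    lower_tail = norm_cdf(-k - shift)  # P(X < -k)
    return 1e6 * (upper_tail + lower_tail)
```

A centered 3-sigma process leaves roughly 2700 ppm outside the limits; a 6-sigma process with a 1.5-sigma shift leaves roughly 3.4 ppm.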

Uses of Process Capability Study: 1) Predicting whether design tolerances can be met 2) Assigning equipment to production 3) Evaluating new equipment purchases 4) Estimating fraction defective to be expected 5) Making adjustments during manufacture 6) Setting specifications 7) Costing out contracts

Process Capability Remedial Steps: 1) Center process at mid-specs 2) Review specifications 3) Apparent ~ True process capability study 4) Change variance (process) 5) 100% inspection 6) Outsource

Process Capability Index

Cp = (USL − LSL) / 6σ; assumes the mean is situated at the mid-point of the specification range. Cpk = min[(USL − μ) / 3σ, (μ − LSL) / 3σ].

Cpk < 1.0 means that the process is NOT capable of meeting its requirements. Cpk is unable to discriminate good/bad for unilateral tolerances (only a USL or only an LSL). Cp shows whether specifications CAN be met; Cpk shows whether specifications ARE being met.

Process Improvement Cycle (continuous):
- Analyse: What should the process be doing? What can go wrong? What is the process doing? Achieve a state of statistical control; determine capability.
- Maintain: Monitor process performance; detect special cause variation and act upon it.
- Improve: Change the process to better understand common cause variation; reduce the common cause variation.
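The Cp/Cpk computations can be sketched directly; the spec limits in the comments are made up for illustration:

```python
def cp(usl, lsl, sigma):
    """Cp: potential capability; assumes the mean sits at mid-spec."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Cpk: actual capability; penalises an off-centre mean by taking
    the distance to the NEAREST specification limit."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# With USL=10, LSL=4, sigma=1: a centered mean of 7 gives Cp = Cpk = 1.0,
# while shifting the mean to 8 leaves Cp unchanged but drops Cpk.
```

This makes the Cp-versus-Cpk distinction concrete: only Cpk reacts when the mean drifts off target.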

Process Variation:
- Natural (inherent): random causes; uncontrollable; acceptable. Consists of many individual causes; any one random cause results in a minute amount of variation (e.g. human variation, slight vibration in machines, slight variation in raw material). Cannot economically be eliminated from a process. An observation within the control limits means that the process should not be adjusted.
- Unnatural (systematic): the opposite. E.g. operator blunder, faulty setup, a batch of defective raw material.

Setting/Changing limits:
- Stage 1: Investigate outliers; remove all outliers from the data; recalculate control limits and plot the remaining data points; repeat until all points are in control.
- Stage 2: Do not recalculate control limits unless the process has significantly improved. Control chart limits are not constantly updated with new data.

Role of Statistical Process Control (SPC):
- Quickly detect the occurrence of assignable causes, so that investigation and corrective action may be taken. The goal is to improve the process.
- If only common causes of variability are present, the process output is constant and predictable over time: a) this signals that the process is operating in a routine fashion, as it was intended to; b) process behavior is predictable and its performance can be rationally assessed to determine the extent to which it is meeting the expectations of the customer.
- A control chart detects that a process is out of control, not why. What happens after an out-of-control situation occurs is the core of a successful SPC program.
- The process must be sufficiently stable to use sampling procedures to predict the quality of production or to make process optimization studies.

Statistical Control: A process is in statistical control when its variation pattern over time is subject to the forces of only chance causes. It is then performing in a consistent pattern over time, fluctuating about a fixed mean level with a constant pattern of variation.

Central Limit Theorem (CLT): given a sufficiently large sample size from a population with a finite variance, the mean of all samples from the same population will be approximately equal to the mean of the population. As the sample size increases, the sampling distribution of the mean, X-bar, can be approximated by a normal distribution with mean μ and standard deviation σ/√n: if we repeatedly take independent random samples of size n from any population, then when n is large, the distribution of the sample means will approach a normal distribution.

Nature of the X-bar and R Charts:
- Most sensitive (powerful) for tracking process excursions in the mean and in the variation.
- Three-sigma limits are used; a minimum of 30 subgroups is required to establish control limits.
- Assumes a normal distribution of individuals; subgroup means tend toward a normal distribution because of the CLT.
- Computation of Cp from control limits only: meet spec → capable (opposite: maybe).

Control Chart for Individual Measurement: used when sub-grouping is not possible (n = 1); normality cannot be guaranteed. Moving range MR = |Xi − Xi−1|.

Where to use charts:
- Place charts only where necessary (as determined via FMEA, DOE).
- Identify processes that are critical and cannot be foolproofed.
- If a chart has been implemented, do not hesitate to remove it if it is not value-added.
- In early investigations place charts on output variables; after investigations place charts on critical input variables.
- The goal: monitor and control inputs and, over time, eliminate the need for SPC charts on outputs.

Western Electric Rules:
- One point outside the 3-sigma limit
- Two of three outside the 2-sigma limit
- Four of five outside the one-sigma limit
- Eight consecutive on one side of the center line

Patterns on Control Charts:
- Cycling: systematic changes like temperature fluctuation
- Gradual change: introduction of new machine/process improvements
- Mixture: underestimating the process variability, or "over-control"
- Stratification: points cluster around the mean; causes: (1) incorrect control limits (overestimating process variability), (2) continuous improvements paying off — limits should be recalculated
- Sudden shift
- Trend: component wear
- Systematic variation

Identifying Improvement Goals: by sub-grouping the data, we can set improvement goals. Short-term: center the process on the target value. Medium-term: remove/reduce between-sub-group variation. Long-term: remove/reduce within-sub-group variation.

Selecting Subgroup Sample Size:
- Variables chart: 5 if possible; Attributes chart: 30 or more.
- For n = 5, a subgroup can be: 5 repeated measurements at one position; 5 different measurements on one sample; or 5 measurements on five samples.

Categories of Data:
- E.g. length: Variable, Continuous, Quantitative, Measurable
- E.g. Pass/Fail: Attribute, Discrete, Qualitative, Countable

Rational Subgroups:
- Try to capture the process when things are consistent; depends on the critical source of variation.
- Select subgroups so that variability due only to chance causes is captured within the subgroups (members of a subgroup are to be obtained at almost the same time).

Attribute control charts (general model): counts of defectives follow a binomial distribution; counts of defects, Di = no. of defects in each subgroup, follow a Poisson distribution.

Sigma to ppm: the fraction beyond a limit is a normal tail area, e.g. normalcdf(−∞, −2, 0, 1) gives the one-sided fraction beyond 2σ.

Six Sigma objective: not to set wide specifications, but to reduce or control variation so that specifications given by customers become 6-sigma points or further, leaving tremendous margin to spare (a 1.5-sigma shift still gives only ~3.4 ppm). Margin is needed to counter component, manufacturing, deterioration, and environmental variation.

kσ quality level: the process mean (average) is kσ from the nearest specification limit.
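The X-bar and R chart limits can be sketched as follows; A2 = 0.577, D3 = 0 and D4 = 2.114 are the standard table constants for subgroups of size 5 (assumed here):

```python
# Shewhart control chart constants for subgroup size n = 5
# (standard table values; other n need other constants).
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Lower limit, center line, and upper limit for the X-bar
    and R charts, computed from subgroup averages and ranges."""
    xbars = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    xbarbar = sum(xbars) / len(xbars)   # grand average
    rbar = sum(ranges) / len(ranges)    # average range
    return {
        "xbar": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "R": (D3 * rbar, rbar, D4 * rbar),
    }
```

In practice the limits would only be frozen once the Stage 1 procedure above has removed out-of-control subgroups.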

Design Of Experiment (DOE)

Reasons for DOE:
- Most useful when the subject of study is complex, new, or not fully understood.
- The most efficient (in terms of data collection effort) and most effective (in terms of providing information about the characteristics of the subject of study) approach to understanding the input-output relationship of a "black box" system.
- DOE can reveal effects, such as interactions of various orders among input factors, that are otherwise not apparent; hence it is a practical tool for discoveries in R&D and in troubleshooting.
- No need to understand the underlying math.

Terminology:
- Independent variables (inputs) x1, x2, ... are called factors (quantitative).
- The dependent variable (output) y is called the response.
- Factors are all controllable, and the response is measurable.
- Factors that are not controllable are noise; the sum of all noise = e.
- A replicate is a response measured from an independent run, i.e. it is not a repeated measurement of the same response.

Words of caution (attribute charts):
- Technical requirements cannot be used to set control limits.
- Not applicable for high-yield processes (i.e. extremely low p).
- Use variable charts whenever possible.
- As these charts are used for monitoring undesirable outputs, they are "passive" in nature.

One-Factor-At-A-Time (OFAT):
- Large number of observations
- Cannot detect interactions
- Effects of factors cannot be independently estimated
- Not possible to test the significance of individual effects
- Process optimization is difficult

2k Full Factorial:
- Multiple input factors are manipulated to determine their effect on a desired output (response).
- DOE can identify important interactions that may be missed when experimenting with one factor at a time.
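For the smallest case, a 2^2 full factorial, the main effects and the interaction fall out of simple contrasts of the four responses; a sketch assuming the usual −1/+1 level coding:

```python
def effects_2x2(y):
    """Main effects A, B and interaction AB for a 2^2 full factorial.

    y maps (level_A, level_B), coded -1/+1, to the measured response.
    Each effect is the average response at the high level minus the
    average response at the low level of that contrast.
    """
    A = (y[(1, -1)] + y[(1, 1)] - y[(-1, -1)] - y[(-1, 1)]) / 2
    B = (y[(-1, 1)] + y[(1, 1)] - y[(-1, -1)] - y[(1, -1)]) / 2
    AB = (y[(-1, -1)] + y[(1, 1)] - y[(1, -1)] - y[(-1, 1)]) / 2
    return A, B, AB
```

A nonzero AB is exactly the interaction that OFAT experimentation cannot detect.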

np chart: for constant sample size n, d = np.

Six Sigma again: Defects Per Million Opportunities (DPMO) = Defects observed / (Units × Possible Defects Per Unit) × 1,000,000

Critical to Quality (CTQ): internal critical quality parameters that relate to the wants and needs of the customer; define the range of acceptable values; understand the target CTQ value and specification limit(s) for actual CTQ values. Advantage: comparable and exchangeable measures of performance across different systems.
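The DPMO formula above, written directly in code (the figures in the test are made up):

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects Per Million Opportunities:
    defects observed / (units * possible defects per unit) * 1e6."""
    return defects / (units * opportunities_per_unit) * 1_000_000
```

Because DPMO normalises by opportunities rather than by units, it stays comparable across systems of different complexity, which is the advantage the CTQ discussion points at.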

y-hat = predicted response (the value of y predicted by the fitted model)

Randomization: a randomized run sequence helps eliminate the effects of unknown or uncontrolled variables.
Replication: repetition of a complete experimental treatment, including the setup.
Blocking: lets you restrict randomization by carrying out all the trials with one setting of the factor and then all the trials with the other setting (e.g. y2 – y1).
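Randomizing the run order is a one-liner with the standard library; the `seed` parameter here is only for reproducibility of the example:

```python
import random

def randomized_order(treatments, seed=None):
    """Return a randomized run order for a list of experimental
    treatments, spreading time-varying noise across all runs."""
    order = list(treatments)
    random.Random(seed).shuffle(order)
    return order
```

Under blocking, this shuffle would instead be applied within each block rather than across the whole experiment.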

DMAIC: Define the customer, the CTQ issues, and the core business process involved. Measure the performance of the core business process (data collection plan for the process, defects, metrics, sources). Analyse the data collected and the process map to determine root causes of defects and opportunities for improvement. Improve the target process by designing fixes. Control the improvements to keep the process on the new course.

Acronyms:
- PDSA (Plan-Do-Study-Act): collect data, test on a small scale
- Average Run Length (ARL): in a control chart, the number of subgroups expected to be inspected before a shift of a given magnitude is signalled
- DMADV: Define, Measure, Analyze, Design and Verify
- Cost Of Poor Quality: internal, external, appraisal, prevention costs
- NIID: Normal, Independent, Identical Distribution
- Full Factorial Design
- Failure Mode Effects Analysis (FMEA)
- Process Performance Management (PPM)
- Total Quality Management (TQM): a management approach to long-term success through customer satisfaction
- PVC: Process (not product, go upstream), Variation, Customers
- 5 deadly diseases: lack of constancy of purpose; emphasis on short-term profits; annual grading of performance; mobility of management; use of visible figures only...

