
Quality Control Tools for Managing FEA in Product Design


A Forward-Thinking Approach, Part II

The first part of this article appeared in the April 2005 edition of BENCHmark and discussed the need for Quality Assurance programmes as well as the component parts necessary in such programmes. Here, part two discusses each component in more detail, along with some background on the driving factors as well as alternative options, both NAFEMS and otherwise.

Management Education

A survey posted prior to the 2001 NAFEMS World Conference indicated that management, for various reasons, was seen by the users who responded as the greatest barrier to the success of FEA in product design. Management issues ranged from active obstruction, driven by misconceived ideas about the value (or lack thereof) that analysis provides, to passive hindrance, such as simply not allotting time for training, learning curves, or investigative studies. Many managers passively obstruct analysis success by refusing to provide adequate compute resources, software tools, or priority to a budding analysis team. Many managers simply lack an understanding of how analysis can impact the company's goals and see it only as a fire extinguisher. Helping managers set proper expectations regarding the capabilities and limitations of analysis is often the single most important step in improving the quality and value of simulation at a company. Management training should also include a discussion of the validity of assumptions and results, quality control concepts, and an overview of the skills that managers should expect their team members to possess to be productive with FEA. Management training can also take managers through a virtual project in which they are put in the role of the analyst and allowed to make modeling decisions, so that they can experience first-hand the sensitivity of results to those decisions. Upon completion of this training, engineering managers and project leaders should be able to interact with their team on a higher level while better understanding the opportunities the technology has to offer.

Process Audit
The first step in the establishment of a QA program should be to document existing processes and goals, including the technical, organizational, and competitive goals. Because the program should serve the company, any recommendations should be preceded by developing an understanding of how products are developed, what the historical issues and challenges have been, what interactions exist and which ones should be grown, and how simulation technologies can best impact a company's bottom line. A process audit should evaluate not only the tools used by an engineering department but also identify opportunities to utilize additional state-of-the-art tools that can impact a different aspect of the design process or allow simulation activities to grow beyond current limitations. A process audit should help ensure that all groups involved in the design process are on the same page. At times, these audits highlight immediate opportunities for collaboration or synergies that were previously unobserved. Finally, the process audit should attach monetary values to the typical tasks in a product development department so that potential savings and opportunities for gains can be more readily identified. The report that is generated from the process audit should be a living document that allows periodic review of critical components and observations.

User Skill Level Assessment


Skill level assessment may be the most difficult and most controversial component of a QA program, but it can have the most far-reaching impact. Skill level assessment isn't technically difficult, as there are several areas of expertise that are fundamental to the successful use of analysis in the design process. The difficulty lies in the potential for perceived threat that was discussed in Part I of this article.


If a user is concerned that his or her job or reputation may be on the line by cooperating with a skills assessment, he or she may put up serious resistance that could undermine the entire QA program. IMPACT has had great success with skills assessment for various CAD tools as a means of qualifying full-time and contract candidates, and has learned that the positioning of the program has as much to do with the level of acceptance as the content. Users must be shown that the program is not a test but merely a tool to help them understand their skills and needs better. Our work with CAD users has shown that engineers who are open to the assessment program tend to be more open to new opportunities and to be better team players.

Conversely, engineers who resist the assessment, or are critical of it before even going through it, will likely be a problem to their eventual employers. Recently, we sponsored a workshop to discuss the development of a skills assessment program that would be acceptable and valuable to attendees and their organizations. The excellent turnout validated the concept, and the conclusions and recommendations provided the framework for a solid approach. Interestingly, the nearly unanimous opinion of the 40-person group was that a combination of a written test and a hands-on assessment was critical to a fair evaluation of a candidate's skills. The resulting program, developed both from the group's input and our subsequent interaction with users, comprises four sections: Basic Engineering Mechanics, FE Knowledge, Hands-On Problems, and Portfolio Review. Additionally, the group felt that interactive involvement of the evaluator was important and that utilizing an outside firm would be the best means to remove potential political or other internal ill will. It was decided that the program should be generic yet flexible enough to account for company-specific analysis needs, and be primarily valid for determining responsibilities within that organization. The format for such a program is outlined here.

Basic Engineering Mechanics

It is understood that an analyst charged with evaluating failure and interpreting results must have a solid grasp of failure theory, stress concepts, material properties, and various mechanics principles to be effective. Unfortunately, these are the exact skills that we see lacking in analysts at all levels of use. Recently, I had the opportunity to sit in on a training class for regional technical leaders of a popular software tool. The instructor was recognized as one of their top people as well. However, when he espoused von Mises stress as the only stress data needed for all analyses, I felt compelled to bring up ductile vs. brittle behavior and was told that fracture mechanics was beyond the scope of the course. The instructor was unclear on the applicability of von Mises stress to predicting brittle failure and resisted attempts to clear the issue up. Engineering mechanics is the cornerstone of all predictive analysis. Consequently, the topic is the second chapter in our book, Building Better Products with Finite Element Analysis [5] (it would have been the first chapter, but the publisher thought some introduction was needed!), and it is the first class we recommend when developing a multi-course training program for clients. The structure of an engineering mechanics review might look something like this: Material Properties (multiple choice and true/false); Failure Theory (multiple choice and true/false); Free-Body Diagrams (multiple choice and work from scratch); Basic Hand Calculations (choice of appropriate equations and knowledge of terms and techniques).
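To make the distinction at issue concrete: the von Mises (distortion energy) criterion compares an equivalent stress with the yield strength and is appropriate for ductile materials, while a maximum normal (principal) stress criterion is the more common first check for brittle materials. Stated in simplified textbook form in terms of the principal stresses:

```latex
\sigma_{vM} = \sqrt{\tfrac{1}{2}\left[(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2\right]} \;\ge\; \sigma_{yield} \quad \text{(ductile yielding)}

\max(\sigma_1,\sigma_2,\sigma_3) \;\ge\; \sigma_{ult} \quad \text{(brittle fracture, maximum normal stress)}
```

These are simplifications (brittle materials often call for Mohr-based refinements), but they are enough to show why a single von Mises plot cannot answer every failure question.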


Finite Element Knowledge


Another key segment of knowledge that is mandatory for users of FE is a basic knowledge of terms, concepts, capabilities, and limitations. We've worked with engineers who have great mechanics and practical engineering ability but who have developed implausible FE scenarios, grossly inappropriate meshes, and boundary condition schemes that totally redirect attention from the true problem areas. Consequently, a portion of an FE skills assessment program should explore a candidate's knowledge of core FE fundamentals. Such a program might contain the following two sections:


Terminology: Structured true/false or multiple choice questions that require users to prove knowledge of usage, not just definitions.

FEA Process: Multiple choice, true/false, and developed problems that cover these areas of the process: mesh quality and validity; boundary conditions; units; and model validation.


Hands-On Sample Problems


A colleague is fond of saying that FEA is not a spectator sport. Use and application of the tools of the trade are extremely important to the success of an analyst, so a portion of the skills assessment should evaluate how a user handles himself or herself on the analysis tools that are to be part of the job. While the assessment program itself should be code-independent, a user has reduced value to a company if he or she cannot operate the software that the company has chosen for its analysis challenges. The hands-on problems should be chosen to fit within all popular FEA programs, allow completion within a pre-processor (unless a company determines that CAD interoperability skills are critical to its needs), and be scaled to allow completion in a reasonable amount of time if a user understands key FE concepts. The suite of problems should include general, all-purpose industry problems designed to explore a user's problem-solving abilities. Additionally, it should include problems that might be more specific to the types of analyses commonly performed at that particular organization. An optimal combination might be a mandatory set of problems of the first variety and a user- or company-selectable set of problems that probes more specific skill sets. The candidate should be able to use the tool of his or her choice, or that specified by the company, and should perform these tasks in front of the evaluator. The evaluator should be looking for clearly defined markers indicating that the candidate investigates uncertainties and ambiguities built into the problem. The construction of assumptions, boundary conditions, and the interpretation of results will all provide indicators of a user's level of competence.

"...minor errors in data entry and interpretation can cause major problems in the decisions based off of FE data."
The assessment report provided back to the management of the company should include not only performance results but also a plan for improvement for that individual. Simply providing a Go - No Go recommendation minimizes the value of the assessment. The evaluator should be able to recommend areas, courses, problems, or other support options to allow that candidate to move to the next level of usage within the company. Additionally, the report should detail education background, subsequent training details, special skills that might be shared with others in the organization as well as suggestions for frequency of re-evaluation or skills review. At the beginning of the learning curve, a user should be monitored more regularly than a user who has exhibited stable and competent skills. A program can only provide such valuable detail with an involved evaluator. Finally, some sort of indication of a users level of competence and/or approved responsibility level should be noted. This level indication should be unambiguous and without negative connotations that could distill destructive emotions or behaviors. The following competence indicators are potential candidates: Qualified to Mesh Models (Subcategories may exist for types of models) Qualified to Set-Up Boundary Conditions Qualified to Interpret Results Qualified to Mentor Others

Portfolio Review
The last section of the skills assessment should include a review of past work performed by the candidate. This can include reports, screen shots, on-screen model reviews, and discussion of methodologies. The evaluator will be noting both what was included and what wasn't included in the portfolio. Leeway will be granted for unavailability or confidentiality of project data from previous employers, but the candidate should still be able to discuss his or her work in general terms to show that proper methodology and care were used in the undertaking of those projects. This would be the most subjective part of the assessment, so the evaluator should attempt to correlate the portfolio discussion with observations from the previous sections. For example, a candidate who can confidently describe a complex analysis of a jointed assembly for which he or she has no access to the model data, but who failed to develop boundary conditions or a converged mesh on a simpler problem, will probably be viewed with some skepticism. Conversely, a candidate may not navigate an evaluation problem confidently with someone watching over his or her shoulder but may enthusiastically demonstrate diligence and competence when showing details of a project that he or she has just completed. In this case, the portfolio review may allow the evaluator to look at the performance in the more structured sections in a different light.

The assessment report provided back to the management of the company should include not only performance results but also a plan for improvement for that individual. Simply providing a go/no-go recommendation minimizes the value of the assessment. The evaluator should be able to recommend areas, courses, problems, or other support options to allow that candidate to move to the next level of usage within the company. Additionally, the report should detail education background, subsequent training details, and special skills that might be shared with others in the organization, as well as suggestions for the frequency of re-evaluation or skills review. At the beginning of the learning curve, a user should be monitored more regularly than a user who has exhibited stable and competent skills. A program can only provide such valuable detail with an involved evaluator. Finally, some indication of a user's level of competence and/or approved responsibility level should be noted. This level indication should be unambiguous and without negative connotations that could distill destructive emotions or behaviors. The following competence indicators are potential candidates: Qualified to Mesh Models (subcategories may exist for types of models); Qualified to Set Up Boundary Conditions; Qualified to Interpret Results; Qualified to Mentor Others.


The understanding with these indicators is that if a user is not deemed qualified to perform a task, he or she may still do so under the supervision of someone who holds that qualification. This system provides two benefits to an organization. First, it should prevent erroneous analyses that can, and will, lead to misinformed design decisions. Second, it promotes ongoing communication between those with more skills and those with less. Remember, as Thomas Jefferson said, "he who knows best knows how little he knows."
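As an illustration only (the user names and task categories below are hypothetical, not part of any NAFEMS or IMPACT scheme), such qualification levels could be recorded in something as simple as a short script that flags when a task requires supervision by a qualified colleague:

```python
# Hypothetical sketch of a qualification register; names and categories are illustrative only.
QUALIFICATIONS = {
    "analyst_a": {"mesh", "boundary_conditions"},
    "analyst_b": {"mesh", "boundary_conditions", "interpret_results", "mentor"},
}

def may_perform(user: str, task: str) -> bool:
    """True if the user holds the qualification for this task."""
    return task in QUALIFICATIONS.get(user, set())

def supervisors_for(task: str) -> list:
    """Users qualified both for the task and to mentor others."""
    return [u for u, quals in QUALIFICATIONS.items() if task in quals and "mentor" in quals]

if __name__ == "__main__":
    task = "interpret_results"
    if not may_perform("analyst_a", task):
        print(f"'{task}' requires supervision by one of: {supervisors_for(task)}")
```

The mechanism matters less than the fact that the qualification levels are written down and visible to the whole team.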


User Education & Continuous Improvement


A proactive and forward-thinking quality assurance program should identify the areas of growth and knowledge required to keep the skills of the users sharp and to keep the rust off. A company can't be confident that users are utilizing state-of-the-art techniques and tools unless those users are exposed to people and techniques outside of their familiar surroundings. The process audit conducted at the beginning of the program should identify the critical skills and techniques that are needed to maximize the benefits of simulation, while the skills assessment should identify which users need work in those techniques. Employee growth should be planned, not expected to happen haphazardly. Knowledge and documentation of the next plateau for each user or group of users, with clear milestones, will help ensure that quality is maintained. It is also preferable to insist that all users at an organization go through a standard set of courses so that all are using the same language and have been exposed to the same data. This doesn't preclude individual growth or exploration but ensures that the base is stable and broad.

Data Management
As companies begin to evaluate their PLM (Product Lifecycle Management) structures, the organization of analysis and other product performance data must be included in the initial phases, not just the mainstream design data. D.H. Brown and Associates have investigated the needs of CAE data management [1] and have found that structured PDM (Product Data Management) systems may not be up to the task. PDM systems were typically developed to manage revisions and bill-of-materials hierarchies, not the simplified geometries, results formats, and validation databases required for an analysis program. One client we work with initially had such a haphazard method for managing analysis data that several members of the group had been elevated to the level of "go-to guys" because they knew in whose desk drawer, hard drive, or file cabinet a particular project tape or CD would most likely reside. While every company must develop the PLM and data management system that best fits its organization, a QA program for analysis must tap into that system, formalize it if need be, and provide means for policing the archiving of analysis data so that a company's intellectual property and investment in simulation are secure.
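The field names below are hypothetical, but even a minimal, formalized archive index (a flat file searched by a short script) is a large improvement over the desk-drawer approach sketched above:

```python
# Minimal sketch of an analysis-archive index; fields, file names, and paths are hypothetical.
import csv
import os

INDEX_FILE = "analysis_index.csv"
FIELDS = ["project", "part", "load_case", "analyst", "date", "location"]

def register_run(row: dict) -> None:
    """Append one analysis run to the shared index, creating the header on first use."""
    new_file = not os.path.exists(INDEX_FILE)
    with open(INDEX_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

def find_runs(part: str) -> list:
    """Return every archived run that touched a given part."""
    with open(INDEX_FILE, newline="") as f:
        return [r for r in csv.DictReader(f) if r["part"] == part]
```

A real PLM or PDM deployment would replace the flat file, but the questions it must answer are the same: what was analyzed, by whom, under what assumptions, and where do the files live.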

Pre & Post-Analysis Checklists


NAFEMS has developed an excellent starting point for companies looking to implement checklists as a quality control tool [4]. We suggest taking these a step further and customizing them for a particular company's tools and analysis environment. As part of a total QA program, clients should be able to access these forms on-line via an intranet or the Internet, and the completed forms should be made available as part of the project documentation described below. We've found that, even in our own work, when these simple checking tools are bypassed, minor errors in data entry and interpretation can cause major problems in the decisions based on FE data.
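The items below are paraphrased generic examples, not the NAFEMS forms themselves; the point is simply that a customized checklist can live as structured data that an intranet page or a pre-run script can present and record:

```python
# Illustrative pre-analysis checklist; items are generic examples, not the NAFEMS checklists.
PRE_ANALYSIS_CHECKLIST = [
    "Unit system stated and consistent across geometry, material data, and loads",
    "Material properties traceable to a source and appropriate for the failure mode",
    "Boundary conditions justified against the real support conditions",
    "Mesh convergence checked in the regions of interest",
]

def run_checklist(items: list) -> dict:
    """Prompt for each item and return the recorded responses for the project report."""
    return {item: input(f"{item}? [y/N] ").strip().lower() == "y" for item in items}
```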

Project Documentation
Too few companies have standard report formats for analysis, while many companies don't mandate reports at all. Beyond the obvious loss of intellectual capital a company will experience when an analyst leaves the organization without documenting his or her work, a company loses one of the most important quality control tools in the analysis process when reports aren't completed. A QA program for analysis must include a report format that transcends groups, specializations, and departments. Analysis data on seemingly unrelated components could still provide insight and prevent repetition of work. In addition to providing details of the recent work, a project report should include references to similar historical projects, test data, and correlation criteria. A report should indicate the source of inputs and assumptions as well as comment on the validity of those assumptions. We have seen great success with companies that have adopted HTML-based reporting accessible across a company intranet or the Internet. Many popular analysis tools have built-in HTML reporting tools that can be customized for a particular company. Additionally, a company would benefit from linking test and analysis reports, even to the point of using similar formats for the two related tasks. Oftentimes, a company cites lack of time as the reason for poor or non-existent project reporting. This is a short-sighted excuse, since access to this data can take weeks off a subsequent project if assumptions and data would otherwise have to be redeveloped. Policing of report generation may be one of the more important roles of group leaders or managers.
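Many analysis tools provide their own report templates; as a neutral illustration only (the field names and values below are hypothetical), even the standard library of a scripting language is enough to publish a uniform report skeleton to an intranet share:

```python
# Bare-bones HTML report generator; field names, values, and file names are illustrative only.
from html import escape

def write_report(path: str, meta: dict, assumptions: list, results: dict) -> None:
    """Write a minimal, uniform HTML analysis report that can be linked from an intranet page."""
    rows = "".join(f"<tr><td>{escape(k)}</td><td>{escape(str(v))}</td></tr>"
                   for k, v in results.items())
    items = "".join(f"<li>{escape(a)}</li>" for a in assumptions)
    html = (f"<html><body><h1>{escape(meta['title'])}</h1>"
            f"<p>Analyst: {escape(meta['analyst'])} | Date: {escape(meta['date'])}</p>"
            f"<h2>Assumptions</h2><ul>{items}</ul>"
            f"<h2>Key Results</h2><table border='1'>{rows}</table></body></html>")
    with open(path, "w") as f:
        f.write(html)

write_report("bracket_rev2.html",
             {"title": "Bracket Rev 2 static analysis", "analyst": "A. Analyst", "date": "2005-07-01"},
             ["Bolted joints idealized as bonded contact", "Linear elastic material"],
             {"Peak von Mises stress (MPa)": 182, "Max deflection (mm)": 0.64})
```

The value is in the uniformity: every report states its assumptions, inputs, and key results in the same places, so the next project can find and reuse them.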

Analysis Correlation Guidelines


The final section of a QA program for analysis to be discussed in this article is a documented guideline for analysis correlation. In one management training session, an attendee asked how often we saw companies correlate their FE data to test. My reply was "rarely." He said that he was surprised that so many companies trusted the FE data so much that they would forego testing. This prompted a discussion of the difference between product testing and analysis correlation.


It is an unfortunate reality that many companies believe that if an analysis says a part won't break and the test doesn't break the part, the analysis is correlated. In reality, the only thing this proves is that the part under test didn't break. It is slightly better correlation if a part breaks where the analysis said it should break, and better still if the failure loads match reasonably. However, most experienced analysts are aware that conflicting mistakes in the setup of an analysis can result in seeming self-correction. Therefore, thought should be put into multiple validation points to ensure that boundary conditions, material properties, and geometry are all properly specified, so that consistent correlation for subsequent iterations can be counted on. The analyst and the test technician should work closely together to devise a test intended to correlate the analysis modeling assumptions. Preferably, the analysis results should precede the test so that principal strain directions can guide strain gage orientations.

Similarly, the model can be explored to find stressed areas that are insensitive to identified uncertainties in the model or to local features, so that control measurements or strain gages can validate load path or magnitude. Care should be taken to evaluate the validity of constraints in the model, especially fixed constraints, as these can lead to gross variations in stiffness when comparing test results to analysis results. A QA program for analysis should bridge the gap between test and analysis and document procedures for correlating FE data. Correlation guidelines should be included in reports, and deviations from expectations should always be explained or investigated.
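A simple way to make that bridge concrete is to record a handful of pre-selected validation points and compute the deviation between measured and predicted values. The gauge names, values, and the 10% acceptance band below are purely illustrative:

```python
# Illustrative test/FE correlation check at pre-selected validation points.
# Gauge names, values, and the acceptance band are hypothetical.
validation_points = {
    # gauge_id: (measured_value, predicted_value)
    "SG-01 (fillet, axial microstrain)":  (812.0, 765.0),
    "SG-02 (web, principal microstrain)": (430.0, 488.0),
    "DISP-A (tip deflection, mm)":        (1.92, 1.85),
}

def correlate(points: dict, tolerance: float = 0.10) -> None:
    """Report percent deviation at each point and flag anything outside the agreed band."""
    for name, (measured, predicted) in points.items():
        deviation = (predicted - measured) / measured
        status = "OK" if abs(deviation) <= tolerance else "INVESTIGATE"
        print(f"{name}: test={measured:g}, FE={predicted:g}, deviation={deviation:+.1%} [{status}]")

correlate(validation_points)
```

Whatever form the record takes, the guideline should state which points were compared, what deviation was accepted, and what was done about any point that fell outside the band.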

Summary

No two companies operate alike, even if their product lines are comparable and competitive. Consequently, no QA program can be assumed valid for all companies without running the risk of forcing an engineering organization to cater to the needs of the system. In our training programs, we identify Geometry, Properties, Mesh, and Boundary Conditions as the key assumptions in any analysis and the most likely sources of error. Lawrence Marks [6] restates these somewhat as Shape, Material, Mesh, Loads, and Constraints. Finally, Barna Szabo [7] expands the list to include Problem Formulation, Material Properties, Loading, Constraints, Initial State, Definition of Data of Interest (and Allowables), Assumptions and Error Discussion, and Idealization and Discretization.

However you phrase it, there are some components of every analysis that are recognized as being pivotal to the validity and accuracy of the resulting data. If nothing else, a QA procedure for FEA must provide a cross-check on these factors. Ideally, all users will possess all the skills required to perform competent, diligent, and efficient analysis within an organization, both from a technical and a maturity standpoint. However, as the technology proliferates further and further down the design process, as it should, the likelihood of that becomes more and more remote. It falls on the management and leadership of engineering organizations to foster a controlled quality environment so that analysis can truly become a catalyst for innovation and achieve the goals we all know that it can. Remember, quality doesn't happen by accident. Only with planning, standardization, education, and diligent follow-through can a company truly feel confident that quality is assured. BM

Contact
Vince Adams, IMPACT Engineering Solutions
vadams@impactengsol.com

References:
[1] BERKOOZ, G. & BROWN, D. CAE Data Management: A Critical Success Factor for Virtual Product Design and Subsystem Outsourcing, D.H. Brown & Associates, 2003.
[2] ADAMS, V. Is the Honeymoon Finally Over? Observations on the State of Analysis in Product Design, NAFEMS World Congress 2001 Proceedings.
[3] EPLEY, N. & DUNNING, D. Feeling "Holier than Thou": Are Self-Serving Assessments Produced by Errors in Self- or Social Prediction?, Journal of Personality & Social Psychology, December 2000, Vol. 79(6), 861-875.
[4] BEATTIE, G.A. Management of Finite Element Analysis - Guidelines to Best Practice, NAFEMS, 1995.
[5] ADAMS, V. & ASKENAZI, A. Building Better Products with Finite Element Analysis, OnWord Press, Santa Fe, NM, 1998.
[6] MARKS, L. Final Analysis - A Closer Look at FEA, CAD Server, Nov 27, 2001.
[7] SZABO, B. Quality Assurance in the Numerical Simulation of Mechanical Systems, Computational Mechanics for the Twenty First Century, Saxe-Coburg Publications, Edinburgh, UK, 2000.

"A QA program for analysis should bridge the gap between test and analysis and document procedures for correlating FE data."
Summary
No two companies operate alike, even if their product lines are comparable and competitive. Consequently, no QA program can be assumed valid for all companies without running the risk of forcing an engineering organization to cater to the needs of the system. In our training programs, we identify Geometry, Properties, Mesh, & Boundary Conditions as the key assumptions in any analysis and the most likely sources of error. Lawrence Marks [6] reiterates these somewhat as Shape, Material, Mesh, Loads, and Constraints. Finally, Barna Szabo [7] expands the list to include Problem Formulation, Material Properties, Loading, Constraints, Initial State, Definition of Data of Interest (& Allowables), Assumptions & Error Discussion, and Idealization and Discretization.

[5] ADAMS, V. & ASKENAZI, A. Building Better Products with Finite Element Analysis, OnWord Press, Santa Fe, NM, 1998 [6] MARKS, L. Final Analysis A Closer Look at FEA., CAD Server, Nov 27, 2001 [7] SZABO, B. Quality Assurance in the Numerical Simulation of Mechanical Systems, Computational Mechanics for the Twenty First Century, Saxe-Colburg Publications, Edinburgh, UK, 2000

July 2005

21

You might also like