The first part of this article appeared in the April 2005 edition of BENCHmark and discussed the need for Quality Assurance programmes as well as the component parts necessary in such programmes. Here, part two discusses each component in more detail, along with some background on the driving factors and alternative options, both NAFEMS and otherwise.
Management issues were among those cited by the users who responded; they ranged from active obstruction, because of misconceived ideas about the value (or lack thereof) that analysis provides, to passive hindrance by simply not allotting time for training, learning curves, or investigative studies. Many managers passively obstruct analysis success by refusing to provide adequate computing resources, software tools, or priority to a budding analysis team. Many managers simply lack an understanding of how analysis can impact the company's goals and see it merely as a fire extinguisher. Helping managers set proper expectations regarding the capabilities and limitations of analysis is often the single most important step in improving the quality and value of simulation at a company. Management training should also include a discussion of the validity of assumptions and results, quality control concepts, and an overview of the skills that they should expect their team members to possess to be productive with FEA. Management training can also take managers through a virtual project in which they are put in the role of the analyst and allowed to make modeling decisions, so that they can experience first-hand the sensitivity of the data to those decisions. Upon completion of this training, engineering managers and project leaders should be able to interact with their team on a higher level while better understanding the opportunities the technology has to offer.
Process Audit
The first step in the establishment of a QA program should be to document existing processes and goals, including the technical, organizational, and competitive goals. As the program should serve the company, developing an understanding of how products are developed, what the historical issues and challenges have been, what interactions exist and which ones should be grown, and how simulation technologies can best impact a company's bottom line should precede any recommendations. A process audit should evaluate not only the tools used by an engineering department but also identify opportunities to utilize additional state-of-the-art tools that can impact a different aspect of the design process or allow simulation activities to grow beyond current limitations. A process audit should also help ensure that all groups that are involved in the design process are on the same page. At times, these audits highlight immediate opportunities for collaboration or synergies that were previously unobserved. Finally, the process audit should attach monetary values to the typical tasks in a product development department so that potential savings and opportunities for gains can be more readily identified. The report that is generated from the process audit should be a living document that allows periodic review of critical components and observations.
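As a simple illustration of attaching monetary values to typical development tasks, the short sketch below estimates the potential saving from replacing a fraction of physical prototype iterations with simulation; every figure in it is a placeholder assumption, not a benchmark, and a real audit would substitute the company's own data.

# Hypothetical cost model for a process audit. All numbers are placeholders.
prototype_iterations_per_year = 12
cost_per_prototype = 18_000.0        # build + instrument + test, in currency units
delay_cost_per_iteration = 9_000.0   # schedule impact of each physical iteration
fraction_replaced_by_simulation = 0.5
simulation_cost_per_study = 4_000.0  # analyst time, software, and computing

replaced = prototype_iterations_per_year * fraction_replaced_by_simulation
gross_saving = replaced * (cost_per_prototype + delay_cost_per_iteration)
simulation_cost = replaced * simulation_cost_per_study
net_saving = gross_saving - simulation_cost

print(f"Estimated net annual saving: {net_saving:,.0f}")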
If a user is concerned that his job or reputation may be on the line by cooperating with a skills assessment, he may put up serious resistance that could undermine the entire QA program. IMPACT has had great success with skills assessment for various CAD tools as a means of qualifying full-time and contract candidates and has learned that the positioning of the program has as much to do with the level of acceptance as the content. Users must be shown that the program is not a test but merely a tool to help them understand their skills and needs better. Our work with CAD users has shown that engineers who are open to the assessment program tend to be more open to new opportunities and to be better team players.
- Terminology - Structured True/False or Multiple Choice that requires users to prove knowledge of usage, not just definitions.
- FEA Process - Multiple Choice, True/False, and Developed Problems that cover these areas of the process: Mesh Quality and Validity; Boundary Conditions; Units; Model Validation.
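For illustration only, the structure of such an assessment could be captured in a small data structure so that sections, question formats, and coverage areas are recorded consistently. The sketch below is a hypothetical outline; the section weights and the Python layout are assumptions, not a NAFEMS or IMPACT template.

# Hypothetical skills-assessment outline mirroring the sections discussed here.
# Section names and coverage areas follow the article; weights are assumptions.
from dataclasses import dataclass, field

@dataclass
class AssessmentSection:
    name: str
    formats: list                      # question formats used in this section
    topics: list = field(default_factory=list)
    weight: float = 1.0                # relative contribution to the overall evaluation

ASSESSMENT = [
    AssessmentSection("Terminology", ["True/False", "Multiple Choice"]),
    AssessmentSection(
        "FEA Process",
        ["Multiple Choice", "True/False", "Developed Problems"],
        topics=["Mesh Quality and Validity", "Boundary Conditions", "Units", "Model Validation"],
        weight=2.0,
    ),
    AssessmentSection("Portfolio Review", ["Discussion"], weight=1.5),
]

for section in ASSESSMENT:
    print(section.name, "-", ", ".join(section.formats))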
Portfolio Review

The last section of the skills assessment should include a review of past work performed by the candidate. This can include reports, screen shots, on-screen model reviews, and discussion of methodologies. The evaluator will be noting both what was included and what wasn't included in the portfolio. Leeway will be granted for unavailability or confidentiality of project data from previous employers, but the candidate should still be able to discuss their work in general terms to show that proper methodology and care were used in the undertaking of said projects. This would be the most subjective part of the assessment, so the evaluator should attempt to correlate the portfolio discussion with observations from the previous sections. For example, a candidate who can confidently describe a complex analysis of a jointed assembly for which he/she has no access to the model data, but who failed to develop boundary conditions or a converged mesh on a simpler problem, will probably be viewed with some skepticism. Conversely, a candidate may not navigate an evaluation problem confidently with someone watching over their shoulder but may enthusiastically demonstrate diligence and competence when showing details of a program that they have just completed. In this case, the portfolio review may allow the evaluator to look at their performance in the more structured sections in a different light.

"...minor errors in data entry and interpretation can cause major problems in the decisions based on FE data."

The assessment report provided back to the management of the company should include not only performance results but also a plan for improvement for that individual. Simply providing a Go/No-Go recommendation minimizes the value of the assessment. The evaluator should be able to recommend areas, courses, problems, or other support options to allow that candidate to move to the next level of usage within the company. Additionally, the report should detail education background, subsequent training details, and special skills that might be shared with others in the organization, as well as suggestions for the frequency of re-evaluation or skills review. At the beginning of the learning curve, a user should be monitored more regularly than a user who has exhibited stable and competent skills. A program can only provide such valuable detail with an involved evaluator. Finally, some indication of a user's level of competence and/or approved responsibility level should be noted. This level indication should be unambiguous and without negative connotations that could instill destructive emotions or behaviors. The following competence indicators are potential candidates:

- Qualified to Mesh Models (subcategories may exist for types of models)
- Qualified to Set Up Boundary Conditions
- Qualified to Interpret Results
- Qualified to Mentor Others
The understanding with these indicators is that if a user is not deemed qualified to perform a task, they may still do so under the supervision of someone who has that qualification. This system provides two benefits to an organization. First, it should prevent erroneous analyses that can and will lead to misinformed design decisions. Second, it promotes ongoing communication between those with more skills and those with less. Remember, as Thomas Jefferson said, "he who knows best knows how little he knows."
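As an illustration of how competence indicators and this supervision rule might be tracked, here is a minimal sketch; the qualification names follow the list above, while the data structures, helper function, and example users are assumptions.

# Hypothetical tracking of competence indicators and the supervision rule:
# a user lacking a qualification may still perform the task under the
# supervision of someone who holds it.
QUALIFICATIONS = {
    "mesh": "Qualified to Mesh Models",
    "boundary_conditions": "Qualified to Set Up Boundary Conditions",
    "interpret_results": "Qualified to Interpret Results",
    "mentor": "Qualified to Mentor Others",
}

# Example user records (placeholder names and qualification sets)
users = {
    "analyst_a": {"mesh", "boundary_conditions", "interpret_results", "mentor"},
    "analyst_b": {"mesh"},
}

def may_perform(task, user, supervisor=None):
    """A user may perform a task if qualified, or if supervised by someone qualified."""
    if task in users.get(user, set()):
        return True
    return supervisor is not None and task in users.get(supervisor, set())

print(may_perform("interpret_results", "analyst_b"))                          # False
print(may_perform("interpret_results", "analyst_b", supervisor="analyst_a"))  # True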
Data Management

As companies begin to evaluate their PLM (Product Lifecycle Management) structures, the organization of analysis and other product performance data must be included in the initial phases and not just the mainstream. D.H. Brown and Associates have investigated the needs of CAE data management [1] and have found that structured PDM (Product Data Management) systems may not be up to the task. PDM systems were typically developed to manage revisions and bill-of-material hierarchies, not the simplified geometries, results formats, and validation databases required for an analysis program. One client we work with initially had such a haphazard method for managing analysis data that several members of the group had been elevated to the level of "Go-To Guys" because they knew in whose desk drawer, hard drive, or file cabinet a particular project tape or CD would most likely reside. While every company must develop its own PLM and data management system that best fits its organization, a QA program for analysis must tap into that system, formalize it if need be, and provide means for policing the archiving of analysis data so that a company's intellectual property and investment in simulation are secure.
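As a simple illustration of what policing the archiving of analysis data might look like in practice, the sketch below scans project folders and flags any that are missing required deliverables; the folder layout and the list of required files are assumptions, not a standard.

# Hypothetical archive check: flag analysis project folders that are missing
# the deliverables a QA programme might require. Folder and file names are
# illustrative assumptions only.
from pathlib import Path

REQUIRED_FILES = ["report.html", "model_description.txt", "results_summary.csv"]

def audit_archive(root):
    """Return a mapping of project folder name -> list of missing deliverables."""
    missing = {}
    for project in sorted(Path(root).iterdir()):
        if not project.is_dir():
            continue
        absent = [f for f in REQUIRED_FILES if not (project / f).exists()]
        if absent:
            missing[project.name] = absent
    return missing

if __name__ == "__main__":
    for name, files in audit_archive("analysis_archive").items():
        print(f"{name}: missing {', '.join(files)}")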
Project Documentation

Too few companies have standard report formats for analysis, while many companies don't mandate reports at all. Beyond the obvious loss of intellectual capital a company will experience when an analyst leaves the organization without documenting his work, a company loses one of the most important quality control tools in the analysis process when reports aren't completed. A QA program for analysis must include a report format that transcends groups, specializations, or departments. Analysis data on seemingly unrelated components could still provide insight and prevent repetition of work. In addition to providing details of the recent work, a project report should include references to similar historical projects, test data, and correlation criteria. A report should indicate the source of inputs and assumptions as well as comment on the validity of those assumptions. We have seen great success with companies that have adopted HTML-based reporting accessible across a company intranet or the Internet. Many popular analysis tools have built-in HTML reporting tools that can be customized for a particular company. Additionally, a company would benefit from linking test and analysis reports, even to the point of using similar formats for the two related tasks. Oftentimes, a company cites lack of time as the reason for poor or non-existent project reporting. This is a short-sighted excuse, since access to this data can take weeks off a subsequent project if assumptions and data otherwise have to be redeveloped. Policing of report generation may be one of the more important roles of group leaders or managers.
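To make the idea of a standard, intranet-friendly report format concrete, here is a minimal sketch that renders a project record as a simple HTML page; the field names, example data, and use of Python's standard library are assumptions and do not reflect the reporting tools built into any particular analysis package.

# Minimal, hypothetical HTML project-report generator. The fields mirror the
# items discussed above (inputs, assumptions, related projects, correlation
# criteria); names and structure are illustrative only.
import html

def render_report(record):
    rows = "".join(
        f"<tr><th>{html.escape(k)}</th><td>{html.escape(str(v))}</td></tr>"
        for k, v in record.items()
    )
    return f"<html><body><h1>Analysis Report</h1><table>{rows}</table></body></html>"

report = render_report({
    "Project": "Bracket stiffness study",                 # placeholder example data
    "Source of inputs": "Loads from test group memo",
    "Key assumptions": "Linear material; bonded contact at weld line",
    "Related projects": "2003 bracket fatigue study",
    "Correlation criteria": "Strain at gauge locations within 10%",
})
with open("report.html", "w") as f:
    f.write(report)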
Many companies believe that if an analysis says a part won't break and the test doesn't break the part, the analysis is correlated. In reality, the only thing this proves is that the part under test didn't break. Correlation is slightly better if a part breaks where the analysis said it should break, and better still if the failure loads match reasonably. However, most experienced analysts are aware that conflicting mistakes in the set-up of an analysis can result in seeming self-correction. Therefore, thought should be put into multiple validation points to ensure that boundary conditions, material properties, and geometry are all properly specified so that consistent correlation for subsequent iterations can be counted on. The analyst and the test technician should work closely together to devise a test intended to correlate the analysis modeling assumptions. Preferably, the analysis results should precede the test so that principal strain directions can guide strain gage orientations.
Similarly, the model can be explored to find stressed areas that are insensitive to identified uncertainties in the model or local features, so that control measurements or strain gages can validate load path or magnitude. Care should be taken to evaluate the validity of constraints in the model, especially fixed constraints, as these can lead to gross variations in stiffness when comparing test results to analysis results. A QA program for analysis should bridge the gap between test and analysis and document procedures for correlating FE data. Correlation guidelines should be included in reports, and deviations from expectations should always be explained or investigated.
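As a small illustration of checking multiple validation points, the sketch below compares predicted and measured strains at several gauge locations and flags deviations that would need to be explained or investigated; the gauge names, values, and tolerance are hypothetical.

# Hypothetical test/analysis correlation check: compare predicted and measured
# strain at several gauge locations and flag deviations beyond a tolerance.
PREDICTED = {"gauge_1": 815e-6, "gauge_2": 422e-6, "gauge_3": 131e-6}  # FE strains
MEASURED  = {"gauge_1": 845e-6, "gauge_2": 410e-6, "gauge_3": 178e-6}  # test strains
TOLERANCE = 0.10  # 10% relative deviation; acceptance criteria vary by company

for gauge, predicted in PREDICTED.items():
    measured = MEASURED[gauge]
    deviation = abs(measured - predicted) / abs(measured)
    status = "OK" if deviation <= TOLERANCE else "INVESTIGATE"
    print(f"{gauge}: predicted {predicted:.3e}, measured {measured:.3e}, "
          f"deviation {deviation:.1%} -> {status}")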
"A QA program for analysis should bridge the gap between test and analysis and document procedures for correlating FE data."

Summary

No two companies operate alike, even if their product lines are comparable and competitive. Consequently, no QA program can be assumed valid for all companies without running the risk of forcing an engineering organization to cater to the needs of the system. In our training programs, we identify Geometry, Properties, Mesh, and Boundary Conditions as the key assumptions in any analysis and the most likely sources of error. Lawrence Marks [6] restates these somewhat as Shape, Material, Mesh, Loads, and Constraints. Finally, Barna Szabo [7] expands the list to include Problem Formulation, Material Properties, Loading, Constraints, Initial State, Definition of Data of Interest (and Allowables), Assumptions and Error Discussion, and Idealization and Discretization.
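These key assumptions lend themselves to a simple, reviewable cross-check. The sketch below records a reviewer's sign-off against the categories named above; the individual check questions and the helper function are illustrative assumptions that each company would replace with its own.

# Hypothetical QA cross-check: record a sign-off on the key assumption
# categories identified above. The questions are illustrative only.
QA_CHECKLIST = {
    "Geometry":            "Do idealizations (defeaturing, mid-surfaces) preserve the load path?",
    "Properties":          "Are material data sourced, in consistent units, and valid for the load range?",
    "Mesh":                "Have mesh quality and convergence been checked at the data of interest?",
    "Boundary Conditions": "Are loads and constraints justified, including the stiffness effect of fixed constraints?",
}

def review(answers):
    """Return the categories that are not yet signed off."""
    return [category for category in QA_CHECKLIST if not answers.get(category, False)]

# Example sign-off (values are placeholders)
open_items = review({"Geometry": True, "Properties": True, "Mesh": False})
print("Open QA items:", open_items or "none")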
However you phrase it, there are some components of every analysis that are recognized as being pivotal to the validity and accuracy of the resulting data. If nothing else, a QA procedure for FEA must provide a cross-check on these factors. Ideally, all users will possess all the skills required to perform competent, diligent, and efficient analysis within an organization, both from a technical and a maturity standpoint. However, as the technology proliferates further and further down the design process, as it should, the likelihood of that becomes more and more remote. It falls on the management and leadership of engineering organizations to foster a controlled quality environment so that analysis can truly become a catalyst for innovation and achieve the goals we all know that it can. Remember, quality doesn't happen by accident. Only with planning, standardization, education, and diligent follow-through can a company truly feel confident that quality is assured. BM

Contact

Vince Adams, IMPACT Engineering Solutions
vadams@impactengsol.com

References:

[1] BERKOOZ, G. & BROWN, D. CAE Data Management: A Critical Success Factor for Virtual Product Design and Subsystem Outsourcing, D.H. Brown & Associates, 2003.
[2] ADAMS, V. Is the Honeymoon Finally Over? Observations on the State of Analysis in Product Design, NAFEMS World Congress 2001 Proceedings.
[3] EPLEY, N. & DUNNING, D. Feeling "Holier than Thou": Are Self-Serving Assessments Produced by Errors in Self- or Social Prediction?, Journal of Personality & Social Psychology, December 2000, Vol. 79(6), 861-875.
[4] BEATTIE, G.A. Management of Finite Element Analysis: Guidelines to Best Practice, NAFEMS, 1995.
[5] ADAMS, V. & ASKENAZI, A. Building Better Products with Finite Element Analysis, OnWord Press, Santa Fe, NM, 1998.
[6] MARKS, L. Final Analysis: A Closer Look at FEA, CAD Server, Nov 27, 2001.
[7] SZABO, B. Quality Assurance in the Numerical Simulation of Mechanical Systems, Computational Mechanics for the Twenty First Century, Saxe-Coburg Publications, Edinburgh, UK, 2000.