Siemens PLM's robust FEA solver NX Nastran is offered in multiple flavors. To begin with, it is available with multiple graphical user interfaces, and the right choice depends on the user's existing inventory as well as the technical resources available. There are three options to explore:

  • Basic designer-friendly solution: In this bundle, basic NX Nastran capabilities are embedded in the NX CAD environment. The environment also offers stress and frequency solution wizards that guide the user through the workflow. This solution is primarily meant for designers who wish to perform initial FEA studies on simple models. Advanced solver and meshing functionalities are not available.
  • Advanced solution for analysts: This solution offers more features with more complexity, so it is not meant for novice users and requires prior understanding of FEA technology. Two separate GUIs are associated with this class of NX Nastran:
      • NX CAE based solution: This is a dedicated pre/post processor for FEA modeling with its own look and feel. It looks different from NX CAD, but it is tightly coupled with NX CAD in terms of associativity, so any updates to the CAD model propagate quickly to the FEA model through synchronous technology. If required, this solution can be associated with Siemens Teamcenter for simulation process management.
      • FEMAP based solution: This is another dedicated, PC-based pre/post processor from Siemens with its own look and feel. FEMAP offers a CAD-neutral and solver-neutral FEA environment. It is tightly coupled with the NX Nastran solver, but it can also generate input decks for Abaqus, ANSYS, LS-DYNA, Sinda, and others.

This covers all the possible GUI offerings for NX Nastran. Now let's have a look at what functionalities are available within the NX Nastran solver itself. Veteran Nastran users know very well that the various physics-based solver features of Nastran are called solution sequences, and each of them is associated with a number.

  • Solution sequence 101: This is the most popular sequence in the Nastran family. It primarily offers linear static functionality to model linear materials, including directional materials such as composites, for small-deformation problems. Basic contact features such as GAP elements are also included. This sequence is widely used in the transportation & mobility and aerospace verticals.
  • Solution sequence 103: This is another popular solution sequence; it extracts the natural frequencies of parts and assemblies. Multiple algorithms are available for frequency extraction, such as AMS and Lanczos. This sequence serves as a precursor to full-blown dynamics analysis in Nastran.
  • Solution sequence 105: This sequence offers linear buckling at the part and assembly level. Typical outputs are the buckling factor and the buckling eigenvector. The buckling factor is a single numerical value that scales the applied load to the critical buckling load, while the eigenvector predicts the buckled shape of the structure (see the relations just after this list).
  • Solution sequence 106: This sequence introduces basic non-linear static capabilities, and sequence 101 is a prerequisite for it. It supports large deformations, metal plasticity, and hyperelasticity. Large-sliding contact is also available, but it is preferable to limit contact modeling to 2D models; defining contact between 3D surfaces in this sequence is tedious.
  • Solution sequences 108, 109, 111, 112: All of these sequences model the dynamic response of a structure, taking inertia as well as unbalanced forces and accelerations into consideration. They are very robust, which makes Nastran the first-choice dynamic solver in the aerospace world. Sequences 108 and 111 are frequency-based, meaning inputs and outputs are specified over a frequency range defined by the user; the solution scheme can be either direct or modal. Sequences 109 and 112 are transient, i.e. time-based, meaning inputs and outputs are functions of time; again, the scheme can be either direct or modal.
  • Solution sequences 153, 159: These are thermal simulation sequences: 153 is steady-state and 159 is transient. Each takes thermal loads such as heat flux as inputs and provides temperature contours as outputs. They do not include fluid flow, but they can be used in conjunction with the NX flow solver to simulate conjugate heat transfer problems.
  • Solution sequence 200: This is a structural optimizer that includes topology and shape optimization modules for linear models. An optimization solver is not an FEA solver; it works in tandem with the FEA solver at each optimization iteration, hence sequence 101 is a prerequisite for NX Nastran optimization. Topology and shape optimization often have different objectives: topology optimization is primarily used for lightweight design, saving material costs, while shape optimization is used for stress homogenization and hot-spot elimination.
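For readers who want the math behind two of these sequences, the standard textbook forms are below (the notation is generic, not NX Nastran-specific). Sequence 105 solves the buckling eigenproblem

$$(K + \lambda K_g)\,\varphi = 0,$$

where $K$ is the elastic stiffness matrix and $K_g$ the geometric stiffness under the applied reference load. The lowest eigenvalue $\lambda$ is the buckling factor, so the critical load is $\lambda$ times the applied load; for example, $\lambda = 3.2$ on a 10 kN applied load predicts buckling at 32 kN, while $\lambda < 1$ means the applied load already exceeds the critical load. The dynamic sequences solve the equation of motion either in the time domain (109/112),

$$M\ddot{u}(t) + C\dot{u}(t) + Ku(t) = f(t),$$

or in the frequency domain (108/111),

$$(-\omega^2 M + i\omega C + K)\,\hat{u}(\omega) = \hat{f}(\omega).$$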

Questions? Thoughts? Leave a comment and let me know.

Additive manufacturing is not a new technology – it was introduced in the manufacturing industry in the late 1980s for very niche applications. Stereolithography, a variant of additive manufacturing, was introduced in 1986 for rapid prototyping applications; however, its true potential remained hidden for a long time. Additive manufacturing primarily refers to methods of creating a part or a tool using a layered approach. As a still-evolving technology, it now covers a family of processes such as material extrusion, material jetting, directed energy deposition, powder bed fusion, and more.

Additive manufacturing expands design possibilities by eliminating many manufacturing constraints. In contrast to rapid prototyping and 3D printing, the focus in additive manufacturing has shifted to functional requirements; however, these functional requirements may deviate from what is expected due to many factors typical of an additive manufacturing process:

  • Change in material properties: The mechanical and thermal properties of a manufactured part differ from the raw material properties. This happens due to the material phase changes typical of most additive manufacturing processes.
  • Cracking and failure: The process itself generates a lot of heat, which produces residual stresses due to thermal expansion. These stresses can crack the material during manufacturing.
  • Distortion: Thermal stresses can lead to distortion that can make the part unusable.

The additive manufacturing process is not yet certifiable, which is a major barrier to the widespread commercial adoption of these processes. The ASTM F42 committee is working on defining AM standards with respect to materials, machines, and process variables.

The role of simulation in additive manufacturing

  • Functional design: The first objective is to generate a suitable design that meets functional requirements, then subsequently improve the design through optimization methodologies that work in parallel with simulation.
  • Generate a lattice structure: Many parts manufactured through AM have a lattice structure instead of a full continuum. One objective of simulation in AM is to generate a lattice structure and optimize it using sizing optimization.
  • Calibrate the material: As mentioned before, the material properties of a final part can differ substantially from those of the raw material. The next objective is to capture the phase transformation process through multi-scale material modeling.
  • Optimize the AM process: Unwanted residual stresses and distortions can develop during the process. It is necessary to capture these physical changes accurately to minimize the gap between the as-designed and as-manufactured part specs.
  • In-service performance: Evaluate how the manufactured part will perform under real-life service loads with respect to stiffness, fatigue, etc.


Now let’s discuss each of these objectives in more detail, with respect to SIMULIA. […]

I mentioned the process automation concept of Isight in a previous simulation automation blog. Isight is a simulation automation and parametric optimization tool used to create workflows that automate the repetitive process of model update and job submission, with certain objectives attached. The objective could be the achievement of an optimal design through any of the techniques available in Isight: design of experiments, optimization, Monte Carlo simulation, or Six Sigma. In this blog post, I will discuss the various value-added algorithms in the DOE technique; I will discuss the other techniques in future blogs.

Why design of experiments?

Real-life engineering models involve multiple design variables and multiple responses. There are two ways to evaluate the effect of a change in a design variable on a response: the vary-one-at-a-time (VOAT) approach or the design of experiments (DOE) approach. The VOAT approach is not viable because:

  • It ignores interactions among design variables, as well as averaged and non-linear effects.
  • In models with a large number of FE entities, each iteration is very expensive. VOAT does not offer the option of creating high-fidelity models with a manageable number of iterations.

With the DOE approach, the user can study the design space efficiently, manage a multi-dimensional design space, and select design points intelligently rather than guessing manually. The objective of any DOE technique is to generate an experimental matrix using formal, proven methods. The matrix explores the design space, and each technique creates its design matrix differently. There are multiple techniques, discussed shortly, classified into two broad configurations:

  • Configuration 1: The user defines the number of levels and their values for each design variable. The chosen technique and the number of variables determine the number of experiments.
  • Configuration 2: The user defines the number of experiments and the range of each design variable.

Box-Behnken Technique

This is a three-level factorial design consisting of orthogonal blocks that excludes extreme points. Box-Behnken designs are typically used to estimate the coefficients of a second-degree polynomial. The designs either meet, or approximately meet, the criterion of rotatability. Since Box-Behnken designs do not include any extreme (corner) points, they are particularly useful in cases where the corner points are either numerically unstable or infeasible. Box-Behnken designs are available only for three to twenty-one factors.
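To make this concrete, here is a minimal Python sketch that generates a three-factor Box-Behnken matrix and scales it to physical ranges. It assumes the open-source pyDOE2 package purely for illustration (Isight builds these matrices internally), and the three design variables and their bounds are hypothetical.

```python
import numpy as np
from pyDOE2 import bbdesign  # open-source DOE package, assumed installed

# Coded Box-Behnken design for 3 factors: each row is one experiment,
# each column a factor at level -1, 0, or +1 (no corner points).
coded = bbdesign(3, center=1)  # 13 runs: 12 edge midpoints + 1 center point

# Hypothetical variable ranges: (lower bound, upper bound)
bounds = np.array([
    [1.0, 3.0],      # e.g. plate thickness [mm]
    [200.0, 400.0],  # e.g. yield strength [MPa]
    [10.0, 50.0],    # e.g. applied load [kN]
])

# Map the coded levels onto the physical ranges.
mid = bounds.mean(axis=1)
half = (bounds[:, 1] - bounds[:, 0]) / 2.0
experiments = mid + coded * half

print(experiments)  # the experiment matrix to feed into the FE runs
```

This is Configuration 1 from the classification above: the levels are fixed at three per variable, and the technique dictates the run count (13 runs here, versus 27 for a three-level full factorial).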

Central Composite Design Technique […]

For many years, finite element modeling has been the job of a specialist; the tools used to perform even simple finite element analyses have been complex enough to require a subject matter expert. This is primarily due to the complex, difficult-to-understand graphical user interfaces of these products. The job is made even more difficult by the advanced engineering subject knowledge required of the analyst.

Can a mechanical designer who uses CAD tools to create engineering drawings be trained to perform engineering simulations?

In today’s product availability scenario, the answer is “yes.”

A CAD designer using CATIA can create and execute simple finite element models within the CATIA environment by using the CATIA workbenches that have been developed for simulation. This makes it intuitive and easy for designers to ensure that their parts meet their design requirements.


How the simulation methodology is simplified by designer-level tools

  • No need for an expert-level analyst tool to perform simple finite element simulations.
  • No need for manual data transfer between the design and analysis departments.
  • No need for geometry clean-up tools to fix data translation errors.

There are obvious benefits to adopting this simplified approach that integrates the design and analysis environments. The designer can detect design problems early in the design process and can subsequently check various design alternatives in less time. This is largely due to the tight integration of designer-level tools with knowledge-based engineering, which allows the designer to deliver a better product in less time.

Part Level Simulation

From a geometrical perspective, the simulation model can initially be generated at the part level. The native integration within CATIA allows users to perform stress, displacement, and vibration analysis at any time in the design process, allowing more accurate sizing of parts and fewer design iterations. Individual parts consisting of solid, surface, and wireframe geometries can be analyzed under a variety of loading conditions. The analysis specifications, such as loads and restraints, are associative with the design, allowing users to perform analyses quickly and easily. These specifications are then automatically incorporated into the underlying finite element model, meaning that users do not have to work directly with the finite element model. "Virtual parts" allow items like forces, moments, and restraints to be modeled easily without a detailed geometric representation.

Standard reports can be automatically generated in HTML format, providing clear and detailed information about the results of the analysis, including images associated with the computations. These reports can be used to document the analyses that have been performed and to communicate the results of the analysis to other stakeholders in the organization. CATIA V5 Analysis users benefit naturally from the overall PLM solution provided by Dassault Systèmes, including ENOVIA V5 for data and product lifecycle management. CATIA V5 Analysis users can store, manage, and version all the data associated with their product’s simulation and share the information within the extended enterprise. This unique capability allows collaboration and provides access to advanced PLM practices such as concurrent engineering and change management.


Assembly level simulation

If the concept of virtual parts no longer holds and the complexities of various parts interacting with each other make assembly-level simulation mandatory, it is possible to create analysis models for assemblies as well. The analysis of assemblies, including an accurate representation of the way the parts interact and are connected, allows for more realistic and accurate simulation. The designer does not have to make simplifying assumptions about the loading and restraints acting on an individual part. Instead, the part can be analyzed within the environment in which it operates, with the loading automatically determined based on the way the part is connected to and interacts with surrounding parts.

The various types of connections that can be modeled include bolted connections, welded connections, pressure-fitting connections, and many more. To make the job even easier for the designer, these connections can be defined using assembly-level constraints that already exist in the CATProduct model. When the design changes, the associated assembly constraints as well as the corresponding FEA connections are updated, creating an updated FEA model that is ready for analysis.

Concurrent engineering made easier

The “assembly of analysis” capability enables concurrent engineering. For example, the various parts in an assembly can be modeled and meshed separately by different users. They can either use the CATIA V5 meshing tools or import orphan meshes (meshes that don't have any geometry associated with them) developed outside of CATIA Analysis using a variety of different modeling tools. The user responsible for analyzing the assembly can consolidate the different meshes, connect the parts, apply the loading specifications, and run the simulation. This can significantly reduce the turnaround time when analyzing large assemblies, particularly since some of the parts may already have been analyzed, in which case their analysis models are already available.


Extended solver capabilities

The basic FEA solver present in the CATIA designer workbench is called the "Elfini" solver; it can model only simpler physical problems such as linear materials, small deformations, small rotations, and bonded contact. Real-life problems can be much more complex and may necessitate an advanced solver. To address such scenarios, it is possible to include the well-known non-linear solver Abaqus in the CATIA designer environment. Abaqus can model the effects of geometric nonlinearity, such as large displacements, and allows nonlinear materials to be included, such as yielding metals and nonlinear elastic materials like rubber. It also offers more advanced contact capabilities, including the ability to model large relative sliding of surfaces in contact.

The Abaqus capability enables the effect of multiple steps to be analyzed, where the loading, restraints, contact conditions, etc. vary from one step to the next. This powerful technique allows complex loading sequences to be modeled. For example, a pressure vessel might be subjected to an initial bolt-tightening step, followed by internal pressurization, concluding with thermal loading.
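To make the multi-step idea concrete, here is a minimal Abaqus/CAE Python sketch of that pressure-vessel sequence. Treat it as a sketch under assumptions: it presumes a model named 'Model-1' whose assembly already defines the surfaces, set, and bolt-axis datum referenced below (those names are hypothetical), and in practice you would also define an initial temperature field so the thermal step produces thermal strain.

```python
# Run inside Abaqus/CAE (File > Run Script). Assumes 'Model-1' already
# contains the vessel assembly with the named surfaces/sets used below.
from abaqus import mdb
from abaqusConstants import ON, UNIFORM, CONSTANT_THROUGH_THICKNESS

model = mdb.models['Model-1']
asm = model.rootAssembly

# Three general static steps, executed in sequence.
model.StaticStep(name='BoltUp', previous='Initial', nlgeom=ON)
model.StaticStep(name='Pressurize', previous='BoltUp')
model.StaticStep(name='Thermal', previous='Pressurize')

# Step 1: bolt pre-tension on a pre-cut internal surface of the bolt;
# a datum axis along the bolt (assumed to have id 1) sets its direction.
model.BoltLoad(name='Pretension', createStepName='BoltUp',
               region=asm.surfaces['BoltCut'], magnitude=20.0e3,
               datumAxis=asm.datums[1])

# Step 2: internal pressure on the vessel wall (units: Pa if SI).
model.Pressure(name='Internal', createStepName='Pressurize',
               region=asm.surfaces['InnerWall'], magnitude=2.0e6)

# Step 3: a uniform service temperature applied as a predefined field.
model.Temperature(name='ServiceTemp', createStepName='Thermal',
                  region=asm.sets['Vessel'], distributionType=UNIFORM,
                  crossSectionDistribution=CONSTANT_THROUGH_THICKNESS,
                  magnitudes=(150.0,))

mdb.Job(name='vessel_sequence', model='Model-1').submit()
```

Note that each load remains active in subsequent steps unless it is explicitly deactivated, so the pressurization step still sees the bolt pre-load and the thermal step sees both.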




Composites have always had a well-defined place in the aerospace industry because of their properties: light weight, which makes the overall design lighter, and toughness, which lets the overall design bear the aero-structural loads. At present, from aircraft fairings to train noses, boat hulls, and wind turbines, composites offer dramatic opportunities to meet increasingly cost-driven market requirements and environmental concerns. However, modeling composites in a seamless, collaborative environment has always been a challenge: the multiple aspects of composites modeling (design, simulation, and manufacturing) make it quite a tough task on a single platform.

The CATIA composites workbench now offers a solution that addresses the various aspects of composites modeling in a unified manner. The objective of this blog post is to provide information on the composites workbench capabilities with respect to the design, simulation, and manufacturability of composites.


There are different ways to start the preliminary design of a composite part, but zone-based design is ideal for capturing analysis constraints and predicting the behavior of the part inside the design environment by importing thickness laws. The thickness laws are calculated as a result of FEA analysis. The composites part design workbench in CATIA provides easy-to-use, dedicated zone creation and modification features. Zone-based modeling contributes to significant time savings, with the ability to perform concurrent engineering with mating parts. The image below shows a wing panel with a grid created from ribs and spars in assembly context, and the thickness law for each cell mapped onto the grid from a spreadsheet.


Once the grid information is ready, the composites workbench provides highly productive automatic ply generation from zones, with automatic management of ply staggering and stacking rules. The ability to quickly and automatically transition from zones to plies while keeping full associativity allows the designer to focus on design intent and dramatically reduces the number of geometrical tasks required to design the part.


To further check the viability of a design from the structural strength perspective, it is possible to perform FEA simulation within the CATIA environment using the Elfini solver of CATIA Analysis. Full associativity with the composites workbench is maintained, and true fiber angles are taken into account. To address the non-linear aspects of FEA, it is possible to export the ply data in the form of layup files to Abaqus/CAE using the composites fiber modeler plug-in. If design modifications are needed, it is possible to edit and modify any ply or sequence in the composites workbench and instantly export the modified layup file to the simulation workbench or Abaqus/CAE for validation. Thus designers and analysts can work in collaboration during the composites development process, saving time, improving product quality, and preventing costly errors. […]

The Dassault Systèmes SIMULIA portfolio releases new versions of its software products every year, and this year is no different. The first release of Abaqus 2017 is now available for download at the media download portal. SIMULIA has developed and broadcast 2017 release webinars to make users aware of the new features, but those webinars are long recordings, ranging from one to two hours each, which can be daunting. This blog post provides a brief highlight of the material and Abaqus/Explicit updates in the Abaqus 2017 solver. A more detailed explanation of any mentioned update, or answers to further questions, can be obtained by listening to the webinar recordings at the SIMULIA 3DExperience user community portal, leaving a comment on this post, or contacting us.

SPH boundary conditions improvements

In Abaqus 2017, SPH particles located on opposite sides of a surface can no longer interact with each other, even in the absence of an explicit boundary condition; this blocking behavior is now the default, whereas in previous releases such particles could interact. There are further improvements in tensile instability control, to prevent instability among particles subjected to local tensile stresses. Below is an example with SPH particles in two chambers; the lower-chamber particles are subjected to a displacement BC, while the upper-chamber particles are not subjected to any BC.

DEM improvements

  1. The serial and parallel contact search algorithms have been unified to improve DEM performance. The search cells are created only once.
  2. It is now possible to run DEM jobs with particle generators in parallel mode, meaning more than one particle generator can be active while a DEM job is running.
  3. In previous releases, only a fixed time increment scheme was available, and it was difficult for the user to predict an appropriate time increment. In the 2017 release, an automatic time increment scheme has been introduced.
  4. Adhesive particle mixing is now supported, using the JKR (Johnson-Kendall-Roberts) adhesive inter-particle contact model. Both Hertz contact and friction are supported.

Material Enhancements

1. There is some good news for users in the health care industry who design and manufacture cardiovascular stents: superelasticity, which was previously available only as a user subroutine, is now part of the Abaqus 2017 material library. The motivation is Nitinol, a nickel-titanium alloy used in cardiovascular stents because of its superelasticity, shape memory effect, biocompatibility, and fatigue resistance. The Nitinol model exhibits linear elastic austenite behavior at lower stresses. On further loading, transformation from austenite to martensite occurs, but the behavior is still linear elastic. Beyond full transformation, martensite exhibits elastic-plastic behavior. A similar phenomenon is observed under compression loading. The model is supported in Abaqus/CAE.

2. A multilinear kinematic hardening model is now available in Abaqus 2017. In previous releases, this model was available as a user subroutine material called ABQ_MULTILIN_KINHARD. Plasticity follows an array of perfectly plastic subvolumes, each obeying the von Mises criterion with its own unique yield strength; this offers more flexibility than the linear kinematic hardening model (a schematic illustration of the subvolume idea follows this list). The model is available only in Abaqus/Standard, is intended for thermo-mechanical fatigue of metals, and is supported in Abaqus/CAE.

3. The definition of damage initiation and damage evolution for cohesive elements with traction-separation response has been enhanced to include rate-dependent cohesive behavior. It is available only in Abaqus/Explicit.


4. Non-linear damage initiation for ductile metals is now supported in Abaqus 2017. This model provides more flexibility to predict damage under arbitrary loading paths. It is available in both Abaqus/Standard and Abaqus/Explicit for the ductile, shear, and Johnson-Cook damage models.

5. The parallel rheological framework model now supports plane stress elements as well, in both Abaqus/Standard and Abaqus/Explicit.

6. A new user subroutine for user-defined thermal expansion coefficients, VUEXPAN, has been introduced. It can be used in Abaqus/Explicit to define thermal strain increments as a function of temperature, time, element number, state, or field variables. It is available only with Mises plasticity, Hill plasticity, and the Johnson-Cook model.
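To see what the subvolume construction in item 2 means, here is a schematic uniaxial NumPy illustration of the general overlay idea (my own sketch of a Besseling-type construction, not Abaqus's implementation). Each subvolume is elastic-perfectly-plastic with its own yield stress; the weighted sum of their stresses produces a multilinear curve that translates on load reversal, which is exactly kinematic hardening.

```python
import numpy as np

# Overlay sketch of multilinear kinematic hardening (uniaxial).
# All subvolumes share the modulus E but have different yield stresses;
# the weighted sum of their stresses is the total stress.
E = 200e3                          # elastic modulus [MPa]
sy = np.array([200., 300., 400.])  # subvolume yield stresses [MPa]
w = np.array([0.5, 0.3, 0.2])      # subvolume weight fractions (sum to 1)

def response(strain_history):
    sig = np.zeros_like(sy)        # stress carried by each subvolume
    out, prev = [], 0.0
    for eps in strain_history:
        # Elastic predictor for each subvolume, then cap at its yield.
        sig = np.clip(sig + E * (eps - prev), -sy, sy)
        prev = eps
        out.append(w @ sig)        # total stress = weighted sum
    return np.array(out)

# Load to 0.4% strain, then reverse: the unloading branch is shifted
# rather than re-yielding at the same stress level -- the hallmark of
# kinematic hardening.
eps = np.concatenate([np.linspace(0.0, 0.004, 50),
                      np.linspace(0.004, -0.004, 100)])
print(response(eps)[[49, 149]])    # stress at the two strain extremes
```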

Usability Enhancements

1. Enhancements in distortion control: In Abaqus/Explicit, it is possible to convert highly compressed solid elements to a linear kinematic formulation. Once that happens, the analysis does not stop even if the elements become inverted. This behavior is activated by default when solid elements are used with a crushable foam material.

2. Larger stable time increments in Abaqus/Explicit: Abaqus 2017 introduces an improved method of estimating the element characteristic length, which yields larger stable time increments. It is invoked in the explicit step as follows:

*Dynamic, Explicit, improved DT method=YES

(YES is the default; specify NO to revert to the previous estimate.) It is also possible to invoke this method selectively on individual element sets instead of the global model:

*Section Controls, improved DT method=YES or NO


The Dassault Systèmes SIMULIA portfolio releases new versions of its software products every year, and this year is no different. The first release of Abaqus 2017 is now available for download at the media download portal. SIMULIA has developed and broadcast 2017 release webinars to make users aware of the new features, but those webinars are long recordings, ranging from one to two hours each, which can be daunting. This blog post provides a brief highlight of the Abaqus/Standard and Abaqus/Explicit updates in the Abaqus 2017 solver. A more detailed explanation of any mentioned update, or answers to further questions, can be obtained by listening to the webinar recordings at the SIMULIA 3DExperience user community portal, leaving a comment on this post, or contacting us.

Updates in Abaqus Standard

Abaqus/Standard 2017 has been substantially improved with respect to contact formulations. Below are the key highlights of the various contact functionality improvements.

  • Edge-to-surface contact has been enhanced with beams as the master definition. This new approach captures the twisting of beams during frictional contact.
  • Cohesive behavior in general contact.

General contact has always been useful in situations where it is either cumbersome to visualize and define a large number of contact pairs, even using the contact wizard, or impossible to predict the contact interactions from the initial configuration. General contact now supports cohesive behavior, making it possible to define contact in situations like the one shown in the figure below.


Note that cohesive contact does not constrain rotational degrees of freedom; these DOFs should be constrained separately to avoid pivot ratio errors.

There have been a few other changes in cohesive contact interactions. In the 2016 release, cohesive behavior was by default allowed only for first-time contact: either a contact that was closed and cohesive from the start, or an initially open contact that could convert to closed cohesive contact once. In the 2017 release, by default only an initially closed contact maintains cohesive behavior, and an open contact can no longer convert to cohesive contact later. However, it is possible to change these default settings.



  • Linear complementarity problem

A new step feature has been introduced to remove some limitations of the perturbation step. In earlier releases, it was not possible to define contact in a perturbation step that changed its status from open to closed or vice versa. In the 2017 release, a linear complementarity problem (LCP) technique has been introduced in the perturbation step to define frictionless, small-sliding contact that can change its contact status. No other forms of non-linearity are supported in perturbation steps. LCP is available only for static problems; dynamic steps are not supported.


Updates in Abaqus XFEM (crack modeling) […]

The Dassault Systèmes SIMULIA portfolio releases new versions of its software products every year, and this year is no different. The first release of Abaqus 2017 is now available for download at the media download portal. In this blog post, I provide a brief highlight of the updates in Abaqus/CAE 2017. A more detailed explanation of any mentioned update, or answers to further questions, can be obtained by listening to the webinar recordings at the SIMULIA 3DExperience user community portal, leaving a comment on this post, or contacting us.

  • Consistency check for missing sections

Abaqus/CAE users would probably agree that forgetting to assign a section happens quite often, even though parts with defined section assignments are displayed in a separate color. In previous releases, this check was not included in data check runs (abaqus job=my_job datacheck), so the error could not be detected unless a full run was executed. In the 2017 release, regions with missing sections are identified during a data check run, saving the time otherwise wasted on runs that end in fatal errors.



  • New set and surface queries in query toolset

Sets and surfaces can be created at the part as well as the assembly level. In earlier releases, it was not possible to see the contents of a set or surface as text, though it was possible to visualize them in the viewport. In the 2017 release, the query toolbox includes set and surface definition options. For sets, information about geometry, nodes, and elements can be obtained with respect to label, type, connectivity, and association with a part or instance, whichever is applicable. For surfaces, the name, type, and association with instances, constraints, or interactions can be obtained.



  • Geometry face normal query

In the 2017 release, it is possible to visualize the normal of a face or surface by picking it in the viewport. For planar faces, the normal is displayed instantly. For curved faces, CAE prompts the user to pick a point location on the face using one of several options.



In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead.

— Aristotle, Physics VI:9, 239b15

This paradox, as first developed by Zeno and later retold by Aristotle, shows us that a mathematical theory can be disproved by taking its hypothesis to an absurd conclusion. To look at it another way, consider this joke:

A mathematician and an engineer are trapped in a burning room.

The mathematician says, “We're doomed! First we have to cover half the distance between where we are and the door, then half the distance that remains, then half of that distance, and so on. The series is infinite. There'll always be some finite distance between us and the door.”

The engineer starts to run and says, “Well, I figure I can get close enough for all practical purposes.”
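Of course, the mathematician's series converges: the halved distances sum to the full distance to the door, which is why the engineer (and anyone else) reaches it in finite time.

$$\sum_{n=1}^{\infty}\left(\frac{1}{2}\right)^n = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1$$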

The principle here, as it relates to simulation such as FEA, is that every incremental step taken in the simulation process gets us closer to our ultimate goal of understanding the exact behavior of the model under a given set of circumstances. However, there is a point of diminishing returns, beyond which a physical prototype must be built. This evolution of simulating our designs has saved a lot of money for manufacturers who, in the past, would have had to build numerous iterative physical prototypes. And this evolution of FEA reminds me of…

The uncanny valley is the idea that as a human representation (robot, wax figure, animation, 3D model, etc.) increases in human likeness, people feel more affinity toward the representation. That holds, however, only up to a certain point. Once this threshold is crossed, our affinity drops off to the point of revulsion, as in the case of zombies or “intermediate human-likeness” prosthetic hands. Yet as the realism continues to increase, the affinity in turn starts to rise again.

Personally, I find this fascinating: a trend moving through time can abruptly change direction and then, for some strange reason, revert to its original direction. Why does this happen? There are myriad speculations on the Wikipedia page, which I encourage the reader to peruse at leisure.

But to tie this back to FEA, think of the beginning of the uncanny valley curve as the start of computer-aided design simulation, with time on the horizontal axis and accuracy on the vertical axis. I posit that over time, as simulation software has improved, the accuracy of our simulations has also increased. As time has gone on, ease of use has improved as well, allowing non-doctorate holders to utilize simulation as part of their design process.

And this is where we see the uncanny valley: as good as the software is, there comes a point, if you attempt specialized, intricate, or non-standard analysis, where the accuracy of the software falters. This tells us that there will still be a need for those PhDs, and once they engage with the design and start using the software, we see the accuracy go up exponentially.

If you need help getting to the door, or navigating the valley, talk to us about our Simulation benchmark process. Leave a comment or click to contact us.


In the years to come, fuel efficiency and reduced emissions will be key factors determining success within the transportation & mobility industry. Fuel economy is often directly associated with the overall weight of the vehicle. Composite materials have been widely used in the aerospace industry for many years to achieve light weight and better performance at the same time.

The transportation & mobility industry has been following the same trend, and it is not uncommon to see composites applied in this industry sector nowadays; however, unlike in the aerospace industry, the wide application of composites in place of metals is not feasible in the automotive industry. Hence, apart from material replacement, other novel methods of designing and manufacturing lightweight structures without compromising performance will find greater utilization in this segment. In this blog post, I will discuss the application of Tosca, a finite element based optimization technology.

Lightweight design optimization using the virtual product development approach is a two-step process: concept design followed by design improvement.

Design concept: Product development costs are largely determined in the early concept phase. The automatic generation of optimized design proposals reduces the number of product development cycles and the number of physical prototypes; quality is increased and development costs are significantly reduced. All you need is the definition of the maximum allowed design space; Tosca helps you find the lightest design that fits it and considers all system requirements. The technology associated with the concept design phase is called topology optimization; it considers all design variables and functional constraints in the optimization cycle while pursuing the minimum-weight objective function. The technique is iterative and typically converges to an optimal design.


The user starts with an initial design by defining the design space, the design responses, and the objective function. The design space is the region from which material may be removed in incremental steps, and the objective function is often the overall weight of the component being optimized. With each incremental removal of material, the performance of the component changes; hence each Tosca increment is followed by a finite element analysis that checks the current performance against the target performance. If the target performance criteria are satisfied, the updated design increment is accepted and Tosca proceeds to the next increment. This process of incremental material removal continues until the objective function is satisfied or no further design improvement is feasible. The image below depicts the complete CAD-to-CAD process flow in Tosca; the intermediate steps include Tosca pre-processing, co-simulation between Tosca and a finite element code, and Tosca post-processing.

Tosca workflow
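The incremental remove-material-then-reanalyze loop described above is easy to see as code. Below is a toy Python sketch of that control flow only; the "FE analysis" is mocked with a fake per-element utilization measure, and none of the names belong to Tosca's actual interface.

```python
import numpy as np

# Toy illustration of the loop Tosca drives: iterate -> analyze ->
# remove under-utilized material -> re-check the objective. All the
# physics is mocked; a real run pairs each increment with an FE job.

def run_fe_analysis(density):
    """Placeholder for an FE job: returns a fake 'utilization' value
    per element (here simply scaled by a fixed spatial pattern)."""
    return density * np.linspace(0.2, 1.0, density.size)

def topology_loop(n_elems=100, weight_target=0.6, step=0.05, floor=0.01):
    density = np.ones(n_elems)             # start from the full design space
    while density.mean() > weight_target:  # objective: overall weight
        util = run_fe_analysis(density)
        active = np.where(density > floor)[0]
        if active.size == 0:               # nothing left to remove
            break
        # Trim the least-utilized elements that are still above the
        # minimum density (the floor keeps the model well-posed).
        worst = active[np.argsort(util[active])[:max(1, n_elems // 10)]]
        density[worst] = np.maximum(density[worst] - step, floor)
    return density

print(round(topology_loop().mean(), 2))    # ~0.6: target weight reached
```

In a real Tosca run, each pass through this loop is a full finite element job, and the update step also honors whatever manufacturing constraints (casting, stamping, etc.) have been defined.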

During the material removal process, Tosca may be asked to perform the optimization such that the result is feasible not only from a design perspective but from a manufacturing perspective as well. For example, Tosca may be asked to recommend only those design variations that can be manufactured using casting and stamping processes. This is made possible by defining one or more of the manufacturing constraints available in the Tosca constraint library.

manufacturing constraints

While topology optimization is applicable only to solid structures, that does not mean Tosca cannot perform optimization on sheet metal parts. The sizing optimization module of Tosca allows users to define the thicknesses of sheet metal parts as design variables, each with a lower and an upper bound. […]

© Tata Technologies 2009-2015. All rights reserved.