Posts

This post was originally written in January of 2017.

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding common practices and techniques. This week’s blog post will address a common type of 3D printing known as Electron Beam Freeform Fabrication (EBF³).

What is Electron Beam Freeform Fabrication?

It is actually part of a broader category, commonly referred to as Filament Extrusion Techniques. Filament extrusion techniques all utilize a thin filament or wire of material. The material, typically a thermoplastic polymer, is forced through a heating element and extruded as a 2D cross-section onto a platform. The platform is lowered and the process is repeated until the part is complete. In most commercial machines, and higher-end consumer-grade machines, the build area is typically kept at an elevated temperature to prevent part defects. The most common, and the first, technology of this type to be developed is Fused Deposition Modeling.
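
Conceptually, every filament extrusion printer runs the same deposit-and-lower loop. Here is a minimal, runnable sketch of that loop in Python; the MockPlatform class and its methods are purely hypothetical stand-ins for a real machine interface, used only to make the control flow concrete.

```python
class MockPlatform:
    """Hypothetical stand-in for a printer's motion/extrusion interface."""
    def __init__(self):
        self.build_height = 0.0  # mm of part deposited so far

    def extrude(self, cross_section):
        print(f"extruding {cross_section} at height {self.build_height:.1f} mm")

    def lower(self, layer_height):
        # The platform drops one layer so the next slice sits on top.
        self.build_height += layer_height

def print_part(slices, platform, layer_height=0.2):
    """Deposit a 2D cross-section, lower the platform, repeat until done."""
    for cross_section in slices:
        platform.extrude(cross_section)
        platform.lower(layer_height)

print_part(["slice-1", "slice-2", "slice-3"], MockPlatform())
```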

The Fused Deposition Modeling Technique was developed by S. Scott Crump, co-founder of Stratasys, Ltd. in the late 1980s. The technology was then patented in 1989. The patent for FDM expired in the early 2000s. This helped to give rise to the Maker movement by allowing other companies to commercialize the technology.

Electron Beam Freeform Fabrication, or EBF³, is one of the newest forms of rapid prototyping. This technique is performed with a focused electron beam and a metal wire or filament. The wire is fed through the electron beam to create a molten pool of metal. The material solidifies instantaneously once the electron beam moves on, and it is able to support itself (meaning support structures generally aren’t required). The entire process must be executed under a high vacuum.

Pioneered by NASA Langley Research Center, this process is capable of producing incredibly accurate parts at full density – a result other additive manufacturing techniques struggle to achieve, or need secondary operations to match. This is also one of the only techniques that can be performed successfully in zero-gravity environments.

What Are the Advantages of this Process? […]

Abaqus has always been the first choice of analysts for modeling any form of non-linearity: geometric non-linearity, material non-linearity, or boundary-condition non-linearity such as large sliding contact. Within material non-linearity, the most popular model is piecewise linear plasticity, used to model plastic deformation in metals and alloys beyond their yield point. This blog post primarily discusses another powerful but lesser-known class of non-linear material models in Abaqus, used to model elastomers, or rubbers.

Before getting into Abaqus’ functionalities for rubbers, let’s look at the main types of rubbers and their mechanical characteristics:

Solid Rubbers

They exist almost everywhere: tires, weather seals, oil seals, civil engineering equipment, etc. Their main mechanical characteristics are:

  • Nearly incompressible: While it is easy to stretch these materials, it is very difficult to compress them volumetrically. It’s a common observation that a rubber band can be stretched easily but a piece of pencil eraser cannot be compressed so easily. This behavior is particularly important in elastomer modeling.
  • Progressive loading and unloading cycles show hysteresis as well as damage. As cycles continue, damage progresses.

Thermoplastic elastomers

They are a physical blend of rubber and thermoplastic materials, and they can be easily molded or extruded. They are not as strong as solid rubbers, nor as resistant to heat and chemicals, and they are more prone to creep and permanent set.

Elastomeric foams

Commercially, they are referred to as porous rubbers or just foams.

  • They can undergo very large strains, as high as 500%, that are still recoverable. Their counterparts, crushable foams, can exhibit inelastic strains.
  • They exhibit cellular structure that may be open or closed type. Typical examples are cushions, paddings, etc.
  • The compressive stress-strain curve behaves as follows:

Foams exhibit linear behavior in a compressive strain range of roughly 0% to 5%. Beyond that, there is a plateau of severe deformation at almost constant stress; in this region, the cell walls and plates buckle under compression, forming a denser structure. Post buckling, the cellular walls and plates begin to press against one another, causing a gradual rise in compressive stress (a toy piecewise sketch of this curve appears below).

  • Due to their high porosity, foams can undergo very large axial compressive strain with almost no lateral strain; as a result, Poisson’s ratio for foams is nearly zero. This behavior is critical for material modeling of foams in Abaqus.
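
To make the three regions of that curve concrete, here is a toy piecewise sketch in Python. The shape (linear region, buckling plateau, densification) follows the description above, but every numeric value is invented for illustration; this is not an Abaqus material model.

```python
import numpy as np

def foam_stress(eps, E=1.0e6, eps_lin=0.05, eps_dens=0.60, k=2.0e7):
    """Toy compressive stress-strain curve for an elastomeric foam (Pa)."""
    eps = np.asarray(eps, dtype=float)
    plateau = E * eps_lin  # stress level at which cell walls start to buckle
    return np.where(
        eps <= eps_lin,
        E * eps,                                 # 1) linear region, ~0-5% strain
        np.where(
            eps <= eps_dens,
            plateau,                             # 2) plateau: cells buckle at near-constant stress
            plateau + k * (eps - eps_dens) ** 2  # 3) densification: walls press on each other
        ),
    )

print(foam_stress([0.02, 0.30, 0.70]))  # one sample from each region
```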

Material models in Abaqus for rubbers

Abaqus uses the “hyperelastic materials” terminology for the material libraries that support rubbers, primarily because rubbers remain elastic even at very high strains. The basic assumptions in modeling solid rubbers are: elastic, isotropic, and nearly incompressible. Foam material libraries in Abaqus are referred to as “hyperfoam” and are highly compressible. None of the rubber material models can be represented by a single coefficient such as a modulus; instead, they require a strain energy density function, which in general can have an infinite number of terms. Therefore, in Abaqus, strain energy functions take specific forms with a certain number of parameters to be determined. Each of these functions is associated with a separate material model, as shown below. […]
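
As a concrete example of such a strain energy function, consider the two-term Mooney-Rivlin form, W = C10(I1 − 3) + C01(I2 − 3). For incompressible uniaxial tension it yields a closed-form nominal stress, evaluated in the short sketch below; the coefficients are made-up illustrative values, not fitted test data.

```python
import numpy as np

# Illustrative Mooney-Rivlin coefficients (Pa) - not fitted to any real rubber.
C10, C01 = 0.3e6, 0.1e6

def mooney_rivlin_uniaxial(lam):
    """Nominal stress for incompressible uniaxial tension at stretch ratio lam.

    P = 2 * (lam - lam**-2) * (C10 + C01 / lam)
    """
    lam = np.asarray(lam, dtype=float)
    return 2.0 * (lam - lam**-2) * (C10 + C01 / lam)

# A specimen stretched to twice its original length:
print(mooney_rivlin_uniaxial(2.0) / 1e6, "MPa")  # ~1.23 MPa
```

In practice, such coefficients are determined by fitting the chosen form to uniaxial, biaxial, and planar test data, which Abaqus can do automatically from test data input.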

Are you faced with a complex data migration or translation? Do you have years of legacy data that needs to be migrated to a new system? Have you got old CAD data from an outdated system that is still being used?

If you have answered yes to any of these questions, you are facing the prospect of performing a migration or translation project. Here are 10 potential problems that you must look out for before starting:

  1.  Underestimation of effort – too many projects are underestimated, primarily because the use cases for the translation are assumed to be simpler than they actually are. For example, it is assumed that only assemblies need translation – until someone remembers that drawings must be included too.
  2.  “Everything” syndrome – Looking at a project, most organizations default to attempting to translate or migrate everything. In most cases this is not necessary, as only a subset of the data is really relevant. Making this mistake can drive up both cost and complexity dramatically.
  3.  Duplicate data – of everything that needs to be moved, how much is duplicate data (or the same data in slightly different forms)? Experience shows that duplicate percentages can be as high as 20 to 30%. Unfortunately, identifying these duplicates can be difficult, but there are techniques to overcome this problem (see the sketch after this list).
  4.  Accuracy of CAD translation – When looking at 3D CAD translations, how accurate must the translated models be relative to the originals? Again, a blanket requirement of “identical” can drive up cost and complexity hugely. A looser target (say ±2 mm) can improve success.
  5.  Data already exists in Target – Some level of informal manual migration may have already occurred. So, when a formal migration is performed, data “clashes” can occur and result in failures or troublesome duplicates.
  6.  Automatic is not always best – Developing an automated migration or translation tool can be costly if the requirements are numerous. Sometimes a manual approach is more cost-effective for smaller and simpler cases.
  7.  Data Enrichment – Because the source data was created in an older system, it may not have all the properties and data that the target system requires. In this case, these have to be added during the migration or translation process. Forgetting about this step will prevent users from accurately finding data later.
  8.  Loss of Data – For large data volumes, is it possible that some of the data is missed and deleted during the project? Very possible – to prevent this requires exhaustive testing and planning.
  9.  Archive Solution – Once the translation or migration is complete, what happens to the original data? In some cases it is possible to delete it. However, in some environments (e.g. regulatory situations) this may not be allowed. In such a case, has an archive solution been put in place?
  10.  Security – Legacy data may be subject to security (ITAR, competitive data, etc.). Does the migration or translation process expose sensitive information to unauthorized users? Often a process will take the data out of its protected environment. This problem has to be considered and managed.
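
Regarding item 3, a first pass at duplicate detection can be as simple as grouping files by a content hash, as in the Python sketch below. The legacy_cad_vault path is a hypothetical example, and note the caveat: hashing only catches byte-identical copies; “same data in slightly different forms” requires geometry- or metadata-aware comparison.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root):
    """Group files under `root` by SHA-256 content hash; keep only groups > 1."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    for digest, paths in find_duplicates("./legacy_cad_vault").items():
        print(f"{digest[:12]}: {len(paths)} identical copies -> {paths}")
```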

Ask these questions before translations and migrations begin!

This post was originally created in January 2017.

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding the common practices and techniques. So, this week’s blog post will address a common type of 3D printing known as Electron Beam Melting (EBM).

What is Electron Beam Melting?

It is actually part of a broader category, commonly referred to as Granular Based Techniques. All granular based additive manufacturing techniques start with a bed of powdered material. A laser beam or bonding agent joins the material in a cross-section of the part. Then the platform beneath the bed of material is lowered, and a fresh layer of material is brushed over the top of the cross-section. The process is then repeated until a complete part is produced. The first commercialized technique of this category is known as Selective Laser Sintering.
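
All granular techniques share the same fuse, lower, and recoat loop. Here is a minimal, runnable sketch; the MockPowderBed class and its methods are hypothetical stand-ins for a real machine interface, used only to make the sequence concrete.

```python
class MockPowderBed:
    """Hypothetical stand-in for a powder-bed machine interface."""
    def __init__(self):
        self.depth = 0.0  # how far the platform has been lowered, mm

    def fuse(self, cross_section):
        print(f"fusing {cross_section} at depth {self.depth:.2f} mm")

    def lower_and_recoat(self, layer_height):
        # Platform drops one layer; a fresh layer of powder is brushed on top.
        self.depth += layer_height

def build_part(slices, machine, layer_height=0.05):
    """Fuse a cross-section in the bed, lower, recoat, repeat until done."""
    for cross_section in slices:
        machine.fuse(cross_section)
        machine.lower_and_recoat(layer_height)

build_part(["slice-1", "slice-2"], MockPowderBed())
```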

The Selective Laser Sintering Technique was developed in the mid-1980s by Dr. Carl Deckard and Dr. Joseph Beaman at the University of Texas at Austin, under DARPA sponsorship. Deckard and Beaman went on to establish the DTM Corporation with the explicit purpose of manufacturing SLS machines, and in 2001 DTM was purchased by its largest competitor, 3D Systems.

Electron Beam Melting is very similar to Selective Laser Melting, though there are a few distinct differences. EBM uses an electron beam to create a molten pool of material, forming the part one cross-section at a time. The material solidifies instantaneously once the electron beam passes. In addition, this technique must be performed in a vacuum. This is one of the few additive manufacturing techniques that can create full-density parts.

What Are the Advantages of this Process?

EBM is quick; it’s one of the fastest rapid prototyping techniques (though, relatively speaking, most techniques are fast). In addition, it can potentially be one of the most accurate rapid prototyping processes, the major limiting factor being the particle size of the powdered material.

As mentioned previously, this is one of the only additive manufacturing techniques that yields full-density parts; this means parts created with EBM will have similar properties to parts created using traditional manufacturing processes.

Another advantage of the material bed is the ability to stack multiple parts into the build envelope. This can greatly increase the throughput of an EBM machine.

What Are the Disadvantages of this Process? […]

Siemens PLM’s robust FEA solver NX Nastran is offered in multiple flavors. To start with, it is associated with multiple graphical user interfaces, and the right choice depends on the user’s existing inventory as well as the technical resources available. There are three options to explore:

  • Basic designer-friendly solution: In this bundle, basic NX Nastran capabilities are embedded in the NX CAD environment. The environment also offers stress and frequency solution wizards that provide direction to the user throughout the workflow. This solution is primarily meant for designers who wish to perform initial FEA inquiry on simple models. Advanced solver and meshing functionalities are not available.
  • Advanced solution for analysts: This solution offers more features, with more complexity; it is not meant for novice users and requires prior understanding of FEA technology. There are two separate GUIs associated with this flavor of NX Nastran, described below.
  • NX CAE based solver: This is a dedicated pre/post processor for FEA modeling with its own look and feel. It looks different from NX CAD, but it is tightly coupled with NX CAD in terms of associativity – any updates to the CAD model are quickly reflected in the FEA model through synchronous technology. If required, this solution can be associated with Siemens Teamcenter for simulation process management.
  • FEMAP based solver: This is another dedicated, PC-based pre/post processor from Siemens, with its own look and feel. FEMAP offers a CAD-neutral and solver-neutral FEA environment. It is tightly coupled with the NX Nastran solver, but it can also generate input decks for Abaqus, ANSYS, LS-Dyna, Sinda, etc.

This explains all the possible GUI offerings for NX Nastran. Now let’s look at the functionalities available within the NX Nastran solver itself. Veteran Nastran users know very well that the various physics-based solver features of Nastran are called solution sequences, and each is associated with a number (a minimal input-deck sketch follows the list below).

  • Solution sequence 101: This is the most popular sequence in the Nastran family. It primarily offers linear static functionality for modeling linear materials, including directional materials such as composites, in small-deformation problems. Basic contact features such as GAP elements are also included. This sequence is widely used in the T&M and aerospace verticals.
  • Solution sequence 103: This is another popular solution sequence; it extracts the natural frequencies of parts and assemblies. Multiple algorithms are available for frequency extraction, such as AMS and Lanczos. This sequence serves as a precursor to full-blown dynamic analysis in Nastran.
  • Solution sequence 105: This sequence offers linear buckling at the part and assembly level. Typical outputs are the buckling factor and the buckling eigenvector. The buckling factor is a single numerical value that is a measure of the buckling force; the eigenvector predicts the buckled shape of the structure.
  • Solution sequence 106: This sequence introduces basic non-linear static capabilities, and sequence 101 is a prerequisite for it. It supports large deformations, metal plasticity, and hyperelasticity. Large sliding contact is also available, but it is preferable to limit contact modeling to 2D models; defining contact between 3D surfaces is tedious in this sequence.
  • Solution sequences 108, 109, 111, 112: All of these sequences model the dynamic response of a structure, in which inertia as well as unbalanced forces and accelerations are taken into account. These sequences are very robust, which makes Nastran the first-choice dynamic solver in the aerospace world. Sequences 108 and 111 are frequency-based, meaning inputs and outputs are given over a frequency range specified by the user; sequences 109 and 112 are transient, or time-based, meaning inputs and outputs are functions of time. In both cases the solution scheme can be either direct or modal.
  • Solution sequences 153, 159: These are thermal simulation sequences: 153 is steady-state and 159 is transient. Each takes thermal loads such as heat flux as inputs and provides temperature contours as outputs. They do not include fluid flow, but they can be used in conjunction with the NX flow solver to simulate conjugate heat transfer problems.
  • Solution sequence 200: This is a structural optimizer that includes topology and shape optimization modules for linear models. An optimization solver is not an FEA solver; it works in tandem with the FEA solver at each optimization iteration, so sequence 101 is a prerequisite for NX Nastran optimization. Topology and shape optimization serve different objectives: topology optimization is primarily used for lightweight design, saving material cost, while shape optimization is used for stress homogenization and hot-spot elimination.
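
To make the “solution sequence” idea concrete, the sketch below writes out a minimal SOL 101 deck for a single rod in tension. The card layout follows standard Nastran free-field format, but all IDs, property values, and loads are invented for illustration; in practice the deck would come from one of the pre-processors described above.

```python
# Minimal, illustrative SOL 101 (linear statics) input deck: one rod element,
# fixed at grid 1, pulled axially at grid 2. Units: mm, N, MPa.
deck = """\
SOL 101
CEND
TITLE = Single rod in tension
SUBCASE 1
  SPC = 2
  LOAD = 1
  DISPLACEMENT = ALL
BEGIN BULK
GRID,1,,0.,0.,0.
GRID,2,,1000.,0.,0.
CROD,1,1,1,2
PROD,1,1,100.
MAT1,1,210000.,,0.3
SPC1,2,123456,1
FORCE,1,2,,1000.,1.,0.,0.
ENDDATA
"""
with open("rod_sol101.dat", "w") as f:
    f.write(deck)
```

Switching the analysis type is largely a matter of changing the SOL line and the matching case control and bulk entries – for example, SOL 103 with a METHOD case control command and an EIGRL bulk entry for normal modes.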

Questions? Thoughts? Leave a comment and let me know.

Today we will continue our series on the hidden intelligence of CATIA V5. It is important to note that I am using a standard Classic HD2 license for this series. In my last post, we discussed building a catalog of parts based on a single part with a spreadsheet that drives the parameters with part numbers. What about features? If CATIA V5 is powerful enough to generate entire parts based on parameters, shouldn’t it also be able to generate repetitive features? For instance, take a boss feature that appears on the B-side of a plastic part. As a leader, I would not be interested in paying a designer to repeatedly model a feature that changes only slightly across the B-side! Model smarter: make once, use many times.

To do this successfully, you must address a few things – the first being how the feature may change. Of course you may not anticipate every change, but a good rule of thumb is to model with maximum flexibility (big slabs for surfaces, overbuild everything, pay close attention to design intent) and do not use B-reps in your design. Avoid creating and building off features that CATIA builds internally; whenever possible, build your own and link to them by picking only from the tree. The second issue to address is: what will be the parametric numerical inputs that drive the design (e.g., draft angle, wall thickness, outer diameter)? See my first post in this series on how to set these up.

Finally, what will be the geometric inputs that drive the design (e.g., location point, pull line, slide line, mating surface)? A good rule of thumb here is to limit these inputs to the fewest needed to get the job done. Sometimes it is beneficial to sketch all of this out on paper before you build it; I suggest gathering input from all the possible parties to help you with your definition.
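
As a sketch of the idea (outside CATIA), the inputs for a reusable boss might look like the following. The names, values, and the derived relation are all hypothetical; in CATIA they would live as parameters and formulas rather than Python code, but the principle of exposing a few numeric drivers and deriving the rest is the same.

```python
from dataclasses import dataclass

@dataclass
class BossParameters:
    """Hypothetical numeric drivers for a reusable B-side boss feature."""
    outer_diameter: float = 8.0  # mm
    wall_thickness: float = 2.0  # mm
    draft_angle: float = 1.0     # degrees per side
    height: float = 12.0         # mm

    @property
    def inner_diameter(self) -> float:
        # Derived value - analogous to a CATIA formula driven by the inputs.
        return self.outer_diameter - 2.0 * self.wall_thickness

boss = BossParameters(outer_diameter=10.0)
print(boss.inner_diameter)  # 6.0 mm - updates automatically with its drivers
```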

In the example below, I have constructed a boss. Let’s review what I did. […]

This post was originally created in January 2017.

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding the common practices and techniques. So, this week’s blog post will address a common type of 3D printing known as Selective Laser Melting (SLM).

What is Selective Laser Melting?

It is actually part of a broader category, commonly referred to as Granular Based Techniques. All granular based additive manufacturing techniques start with a bed of powdered material. A laser beam or bonding agent joins the material in a cross-section of the part. Then the platform beneath the bed of material is lowered, and a fresh layer of material is brushed over the top of the cross-section. The process is then repeated until a complete part is produced. The first commercialized technique of this category is known as Selective Laser Sintering.

The Selective Laser Sintering Technique was developed in the mid-1980s by Dr. Carl Deckard and Dr. Joseph Beaman at the University of Texas at Austin, under DARPA sponsorship. Deckard and Beaman established the DTM Corporation with the explicit purpose of manufacturing SLS machines; in 2001, DTM was purchased by its largest competitor, 3D Systems.

SLM is a similar process to SLS, though there are some important differences. Instead of the substrate being sintered, it is melted to fuse the layers together. This is typically done in a chamber filled with an inert gas (usually nitrogen or argon) and incredibly low levels of oxygen (below 500 parts per million), to prevent unwanted chemical reactions as the material changes physical state. This technique yields higher-density parts than any sintering process.

What Are the Advantages of this Process?

SLM is quick; it is one of the fastest rapid prototyping techniques (though, relatively speaking, most techniques are fast). In addition, it can potentially be one of the most accurate rapid prototyping processes, the major limiting factor being the particle size of the powdered material.

As mentioned previously, this technique yields higher density parts than other additive manufacturing techniques, making for a much stronger part.

Another advantage of the material bed is the ability to stack multiple parts into the build envelope. This can greatly increase the throughput of an SLM machine.

What Are the Disadvantages of this Process? […]

Additive manufacturing is not a new technology – it was introduced to the manufacturing industry in the late 1980s for very niche applications. Stereolithography, a variant of additive manufacturing, was introduced in 1986 for rapid prototyping applications; however, its true potential remained hidden for a long time. Additive manufacturing primarily refers to methods of creating a part or a tool using a layered approach. As a still-evolving technology, it now covers a family of processes such as material extrusion, material jetting, direct energy deposition, powder bed fusion, and more.

Additive manufacturing expands design possibilities by eliminating many manufacturing constraints. In contrast to rapid prototyping and 3D printing, the focus in additive manufacturing has shifted to functional requirements; however, the finished part may deviate from those requirements due to several factors typical of an additive manufacturing process.

  • Change in material properties: The mechanical and thermal properties of a manufactured part differ from the raw material properties. This happens due to the material phase change that is typical of most additive manufacturing applications.
  • Cracking and failure: The process itself generates a great deal of heat, which produces residual stresses due to thermal expansion. These stresses can crack the material during manufacturing (see the quick estimate after this list).
  • Distortion: Thermal stresses can lead to distortion that can make the part unusable.
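
A quick order-of-magnitude estimate shows why these thermal effects matter. For a fully constrained metal layer cooling through a temperature drop ΔT, the residual stress is roughly σ = E·α·ΔT; the property values below are generic, assumed figures for a titanium alloy, used for illustration only.

```python
# Rough residual-stress estimate for a fully constrained layer on cool-down.
E = 110e9      # Young's modulus, Pa (generic titanium-alloy figure)
alpha = 9e-6   # coefficient of thermal expansion, 1/K
dT = 600.0     # cooling range, K

sigma = E * alpha * dT
print(f"~{sigma / 1e6:.0f} MPa")  # ~594 MPa - a large fraction of typical yield
```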

The additive manufacturing process is not yet certifiable, which is a major barrier to the widespread commercial adoption of these processes. The ASTM F42 committee is working on defining AM standards with respect to materials, machines, and process variables.

The role of simulation in additive manufacturing

  • Functional design: The first objective is to generate a suitable design that meets functional requirements, then subsequently improve the design through optimization methodologies that work in parallel with simulation.
  • Generate a lattice structure: Many of the parts manufactured through AM have a lattice structure instead of a full continuum. One objective of simulation in AM is to generate a lattice structure and optimize it using sizing optimization.
  • Calibrate material: As mentioned before, the material properties of a final part can differ substantially from those of the raw material. The next objective is to capture the phase transformation process through multi-scale material modeling.
  • Optimize the AM process: Unwanted residual stresses and distortions can develop in the process. It is necessary to accurately capture these physical changes to minimize the gap between the as-designed and as-manufactured part specs.
  • In-service performance: Evaluate how the manufactured part will perform under real-life service loads with respect to stiffness, fatigue, etc.

 

Now let’s discuss each of these objectives in more detail, with respect to SIMULIA. […]

Read Part 1 here.

So, what does a structured process to data migration and translation look like?

First a few definitions:

  • Source system – the origin of the data that needs to be translated or migrated. This could be a database or a directory structure.
  • Target system – the final destination for the data. On completion of the process, data in the target should be in the correct format.
  • Verify – Ensure that data placed in the target system is complete, accurate, and meets defined standards.
  • Staging area – an interim location where data is transformed, cleaned, or converted before being sent to the target.

The process consists of five steps as shown below:

The process can be described as follows (a code sketch follows the list):

  • Data to be migrated is identified in the source system. This is an important step and ensures that only relevant data is moved. Junk data is left behind.
  • The identified data is extracted from the source system and placed in the staging area.
  • The data is then transformed into a format ready for the target system. Such a transformation could be a CAD-to-CAD translation, a metadata change, or a cleaning process. Transformation may also entail data enrichment – for example, appending additional properties to the objects so they can be found more easily in the target system.
  • Transformed data is then loaded into the target system. This can be done automatically via programs or manually, depending on the chosen method. Automatic routines can fail, and failures are flagged for analysis and action.
  • Once data is loaded, validation is carried out to ensure that the migrated data is correct in the target system and not corrupted in some fashion.
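
A toy, in-memory version of this five-step flow is sketched below. The MockSystem class and its fields are hypothetical stand-ins; real extractors and loaders would call the source and target systems’ own APIs, as described next.

```python
from dataclasses import dataclass, field

@dataclass
class MockSystem:
    """Stand-in for a source or target system: id -> metadata payload."""
    data: dict = field(default_factory=dict)
    released: set = field(default_factory=set)  # ids of released objects

def migrate(source, target):
    # 1) Identify: select only relevant (here, released) data; junk stays behind.
    ids = [i for i in source.data if i in source.released]
    # 2) Extract the identified data into the staging area.
    staging = {i: dict(source.data[i]) for i in ids}
    # 3) Transform / enrich, e.g. record the legacy id for later searching.
    for i, payload in staging.items():
        payload["legacy_id"] = i
    # 4) Load into the target, flagging failures for analysis instead of aborting.
    failures = []
    for i, payload in staging.items():
        try:
            target.data[i] = payload
        except Exception:
            failures.append(i)
    # 5) Validate: every loaded object exists in the target, enrichment intact.
    assert all(target.data[i]["legacy_id"] == i for i in staging if i not in failures)
    return failures

src = MockSystem({"A1": {"rev": "B"}, "A2": {"rev": "-"}}, released={"A1"})
tgt = MockSystem()
print(migrate(src, tgt), tgt.data)  # [] {'A1': {'rev': 'B', 'legacy_id': 'A1'}}
```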

The process as described above is shown at working level:

[Diagram: working-level view of the migration process, showing the extractor and loader tools]

Shown in this diagram are two software tools – extractors and loaders. These are usually custom utilities that use APIs, or hooks into the source and target systems, to move the identified data. For example, an extractor tool may query a source PLM system for all released and frozen data that was released after a given date. Once this search is complete, the extractor downloads the identified data from the PLM system into the staging area.

In a similar manner, a loader executes against the correct data set in the staging area and inserts it into the target system, creating the required objects and adding the files.

It is highly recommended that pilot migrations be carried out on test data in development environments to verify the process. This testing will identify potential bugs and allow them to be fixed before actual data is touched.

Such a structured process will guarantee success!

PDF Publishing

‘Nuff said.

*and there was much rejoicing*

Well, maybe I could add a little more detail. It has long been known that the PDF is the currency of visual data exchange. All too often, I work with users and organizations that have to print PDFs outside of Vault, creating uncontrolled documents. If you were using the item master (discussed by my colleague here), you could attach the PDF to the item; however, keeping it up to date is still a manual process.

Now, thanks to the #1 most requested feature being implemented, that will no longer be an issue. Vault will now publish PDFs as part of your release process (as a transition action in a lifecycle change). This file is categorized differently than the native CAD file, or even the DWF visualization file. The new category is called “Design Representation,” which can be assigned its own set of rules, properties, and lifecycles.

As of this release, we have the ability to publish 2D file formats – DWG and IDW – meaning either AutoCAD-based files or Inventor drawings can be published to PDF. At some point, Autodesk may need to add the 3D PDF generation that recently came to Inventor; that capability, by the by, could be used to publish all of the new Model Based Definition (MBD) annotations Inventor 2018 has added. I suspect we could see 3D publishing in the next release, or even a mid-year “R2” release (if there is an R2 – who knows at this point).

Questions, comments, and celebrations welcome.
