Are you faced with a complex data migration or translation? Do you have years of legacy data that needs to be migrated to a new system? Have you got old CAD data from an outdated system that is still in use?

If you answered yes to any of these questions, you are facing a migration or translation project. Here are 10 potential problems to look out for before starting:

  1.  Underestimation of effort – too many projects are underestimated, primarily because the use cases for the translation are thought to be simpler than they actually are. For example, the scope may be “assemblies only” until someone remembers that the drawings must also be included.
  2.  “Everything” syndrome – Looking at a project, most organizations default to attempting to translate or migrate everything. In almost all cases this is unnecessary, as only a subset of the data is really relevant. Making this mistake can drive up both cost and complexity dramatically.
  3.  Duplicate data – of everything that needs to be moved, how much is duplicate data (or the same data in slightly different forms)? Experience shows that duplicate percentages can be as high as 20 to 30%. Unfortunately, identifying these duplicates can be difficult, but there are techniques to overcome this problem (a minimal sketch follows this list).
  4.  Accuracy of CAD translation – When looking at 3D CAD translations, how accurate do the translated models need to be relative to the originals? Again, a blanket requirement of “identical” can drive up cost and complexity hugely. A looser target (say ±2 mm) can improve the chances of success.
  5.  Data already exists in Target – Some level of informal manual migration may have already occurred. So, when a formal migration is performed, data “clashes” can occur and result in failures or troublesome duplicates.
  6.  Automatic is not always best – Developing an automated migration or translation tool can be costly when the requirements are numerous. Sometimes, a manual approach is more cost-effective for smaller and simpler cases.
  7.  Data Enrichment – Because the source data was created in an older system, it may not have all the properties and data that the target system requires. In this case, these have to be added during the migration or translation process. Forgetting about this step will prevent users from accurately finding data later.
  8.  Loss of Data – For large data volumes, is it possible that some of the data is missed or lost during the project? Very possible – preventing this requires exhaustive planning and testing.
  9.  Archive Solution – Once the translation or migration is complete, what happens to the original data? In some cases it is possible to delete it. However, in some environments (e.g. regulatory situations) this may not be allowed. In such a case, has an archive solution been put in place?
  10.  Security – Legacy data may be subject to security (ITAR, competitive data, etc.). Does the migration or translation process expose sensitive information to unauthorized users? Often a process will take the data out of its protected environment. This problem has to be considered and managed.
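
On the duplicate-data problem (item 3), the most common first-pass technique is to group candidate files by a hash of their contents, so byte-identical duplicates surface regardless of file name or location. Here is a minimal sketch in Python; the folder name is hypothetical, and near-duplicates in slightly different forms would need fuzzier comparisons on extracted metadata or geometry:

```python
# Minimal duplicate-detection sketch: group files by SHA-256 content hash.
# Byte-identical files end up in the same group regardless of name/folder.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict[str, list[Path]]:
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    # Keep only hashes seen more than once, i.e. true duplicate sets.
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

for paths in find_duplicates("./legacy_vault").values():  # hypothetical folder
    print("Duplicate set:", *paths, sep="\n  ")
```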

Ask these questions before translations and migrations begin!

This post was originally created in January 2017.

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding the common practices and techniques. So, this week’s blog post will address a common type of 3D printing known as Electron Beam Melting (EBM).

What is Electron Beam Melting?

It is actually part of a broader category, commonly referred to as a Granular Based Technique. All granular based additive manufacturing techniques start with a bed of powdered material. A laser beam or bonding agent joins the material in a cross section of the part. Then the platform beneath the bed of material is lowered, and a fresh layer of material is brushed over the top of the cross section. The process is then repeated until a complete part is produced. The first commercialized technique of this category is known as Selective Laser Sintering.

The Selective Laser Sintering technique was developed in the mid-1980s by Dr. Carl Deckard and Dr. Joseph Beaman at the University of Texas at Austin, under DARPA sponsorship. As a result, Deckard and Beaman established the DTM Corporation with the explicit purpose of manufacturing SLS machines, and in 2001 DTM was purchased by its largest competitor, 3D Systems.

Electron Beam Melting is very similar to Selective Laser Melting, though there are a few distinct differences. EBM uses an electron beam to create a molten pool of material, building up the part cross-section by cross-section. The material solidifies almost instantly once the beam moves on. In addition, this technique must be performed in a vacuum. It is one of the few additive manufacturing techniques that can create full-density parts.

What Are the Advantages of this Process?

EBM is quick; it’s one of the fastest rapid prototyping techniques (though, relatively speaking, most techniques are fast). In addition, it can potentially be one of the most accurate rapid prototyping processes, the major limiting factor being the particle size of the powdered material.

As mentioned previously, this is one of the only additive manufacturing techniques that yields full-density parts; this means parts created with EBM will have similar properties to parts created using traditional manufacturing processes.

Another advantage of the material bed is the ability to stack multiple parts into the build envelope. This can greatly increase the throughput of an EBM machine.

What Are the Disadvantages of this Process? […]

Siemens PLM’s robust FEA solver NX Nastran is offered in multiple flavors. First, it is available with multiple graphical user interfaces, and the right choice depends on the user’s existing software inventory as well as the technical resources available. There are three options to explore:

  • Basic designer-friendly solution: In this bundle, basic NX Nastran capabilities are embedded in the NX CAD environment. The environment also offers stress and frequency solution wizards that guide the user through the workflow. This solution is primarily meant for designers who wish to perform initial FEA studies on simple models. Advanced solver and meshing functionalities are not available.
  • Advanced solution for analysts: This solution offers more features at the cost of more complexity, so it is not meant for novice users and requires a prior understanding of FEA technology. There are two separate GUIs associated with this advanced type of NX Nastran, described in the next two bullets.
  • NX CAE based solver: This is a dedicated pre/post processor for FEA modeling with its own look and feel. It looks different from NX CAD but is tightly coupled with NX CAD in terms of associativity – any updates in the CAD model are quickly propagated to the FEA model through synchronous technology. If required, this solution can be associated with Siemens Teamcenter for simulation process management.
  • FEMAP based solver: This is yet another dedicated PC based pre/post processor from Siemens with its own look and feel. FEMAP offers a CAD neutral and solver neutral FEA environment. It is tightly coupled with the NX Nastran solver but it is also possible to generate input decks for Abaqus, ANSYS, LS-Dyna, Sinda, etc.

This explains all the possible GUI offerings for NX Nastran. Now let’s have a look at what functionalities are available within the NX Nastran solver. Veteran Nastran users know very well that various physics-based solver features of Nastran are called solution sequences and each one of those is associated with a number.

  • Solution sequence 101: This is the most popular sequence in the Nastran family. It primarily offers linear static functionality to model linear materials, including directional materials such as composites, for small-deformation problems. Basic contact features such as GAP elements are also included. This sequence is widely used in the T&M and aerospace verticals. (A minimal example input deck follows this list.)
  • Solution sequence 103: This is yet another popular solution sequence; it extracts natural frequencies of parts and assemblies. Multiple algorithms are available for frequency extraction, such as AMS and Lanczos. This sequence serves as a precursor to full-blown dynamic analysis in Nastran.
  • Solution sequence 105: This sequence offers linear buckling at the part and assembly level. Typical outputs are the buckling factor and the buckling eigenvector. The buckling factor is a single numerical value that is a measure of the buckling load; the eigenvector predicts the buckled shape of the structure.
  • Solution sequence 106: This sequence introduces basic nonlinear static capabilities, and Nastran 101 is a prerequisite for it. It supports large deformations, metal plasticity, and hyperelasticity. Large-sliding contact is also available, but it is preferable to limit contact modeling to 2D models; defining contact between 3D surfaces is tedious in this sequence.
  • Solution sequences 108, 109, 111, 112: All of these sequences model the dynamic response of a structure, in which inertia as well as unbalanced forces and accelerations are taken into consideration. These sequences are very robust, which makes Nastran the first-choice dynamic solver in the aerospace world. Sequences 108 and 111 are frequency-based, meaning inputs/outputs are provided over a frequency range specified by the user; the solution scheme can be either direct or modal. Sequences 109 and 112 are transient (time-based), meaning inputs/outputs are provided as a function of time; again, the scheme can be either direct or modal.
  • Solution sequences 153, 159: These are thermal simulation sequences: 153 is steady-state and 159 is transient. Each takes thermal loads such as heat flux as inputs and provides temperature contours as outputs. They do not include fluid flow, but can be used in conjunction with the NX flow solver to simulate conjugate heat transfer problems.
  • Solution sequence 200: This is a structural optimizer that includes topology and shape optimization modules for linear models. An optimization solver is not an FEA solver but works alongside the FEA solver at each optimization iteration; hence, sequence 101 is a prerequisite for NX Nastran optimization. Topology and shape optimization often have different objectives: topology optimization is primarily used to lightweight a design and save material cost, while shape optimization is used for stress homogenization and hot-spot elimination.
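
For orientation, here is a minimal sketch of what a SOL 101 input deck looks like – a single rod element under an axial load, written in free-field format. All IDs and numbers are invented for illustration; in practice the deck is generated by one of the pre/post processors described above:

```
SOL 101                            $ linear static solution sequence
CEND
TITLE = Minimal linear static example: axial rod
SUBCASE 1
  SPC = 10                         $ constraint set
  LOAD = 20                        $ load set
  DISPLACEMENT = ALL
  STRESS = ALL
BEGIN BULK
GRID, 1, , 0.0, 0.0, 0.0           $ fixed end
GRID, 2, , 100.0, 0.0, 0.0         $ loaded end
CROD, 1, 1, 1, 2                   $ rod element 1, property 1, grids 1-2
PROD, 1, 1, 100.0                  $ rod property: material 1, area 100 mm^2
MAT1, 1, 210000., , 0.3            $ steel-like modulus (MPa), Poisson ratio
FORCE, 20, 2, , 1000., 1., 0., 0.  $ 1000 N axial force at grid 2
SPC1, 10, 123456, 1                $ clamp all six DOF at grid 1
ENDDATA
```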

Questions? Thoughts? Leave a comment and let me know.

Today we will continue our series on the hidden intelligence of CATIA V5. It is important to note that I am using a standard Classic HD2 license for this series. In my last post, we discussed building a catalog of parts based on a single part with a spreadsheet that drives the parameters with part numbers. What about features? If CATIA V5 is powerful enough to generate entire parts based on parameters, shouldn’t it also be able to generate repetitive features? For instance, take a boss feature that appears on the B-side of a plastic part. As a leader, I would not be interested in paying a designer to repeatedly model a feature that may change only slightly across the B-side! Model smarter: make once, use many times.

To do this successfully, you must address a few things – the first being how the feature may change. Of course, you cannot anticipate every change, but a good rule of thumb is to model with maximum flexibility (big slabs for surfaces, overbuild everything, pay close attention to design intent) and avoid using B-reps in your design. Avoid building off of features that CATIA creates for you; whenever possible, build your own geometry and link only to elements picked from the tree. The second issue to address is: what will be the parametric numerical inputs that drive the design (e.g. draft angle, wall thickness, outer diameter)? See my first post in this series on how to set these up.

Finally, what will be the geometric inputs that drive the design? E.g. location point, pull line, slide line, mating surface, etc. A good rule of thumb is to limit these inputs to the fewest needed to get the job done. It may be beneficial to sketch all of this out on paper before you build; I suggest gathering input from all the interested parties to help you with the definition.
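
As an aside, if you ever need to drive these numerical inputs programmatically – say, for batch studies or catalog generation – the knowledgeware parameters are reachable through CATIA’s COM automation interface. Below is a hedged sketch from Python via win32com; the parameter names are hypothetical and must match the ones you actually defined in the part:

```python
# Sketch: drive CATIA V5 knowledgeware parameters over COM automation.
# Assumes CATIA is running on Windows with a part document active.
# Parameter names below (Draft_Angle, Wall_Thickness) are hypothetical.
import win32com.client

catia = win32com.client.Dispatch("CATIA.Application")
part = catia.ActiveDocument.Part
params = part.Parameters

# Update the numerical inputs that drive the boss feature.
params.Item("Draft_Angle").Value = 3.0      # degrees
params.Item("Wall_Thickness").Value = 2.5   # millimetres

part.Update()  # regenerate the feature tree with the new values
```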

In the example below, I have constructed a boss. Let’s review what I did. […]

This post was originally created in January 2017.

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding the common practices and techniques. So, this week’s blog post will address a common type of 3D printing known as Selective Laser Melting (SLM).

What is Selective Laser Melting?

It is actually part of a broader category, commonly referred to as a Granular Based Technique. All granular based additive manufacturing techniques start with a bed of a powdered material. A laser beam or bonding agent joins the material in a cross-section of the part. Then the platform beneath the bed of material is lowered, and a fresh layer of material is brushed over the top of the cross-section. The process is then repeated until a complete part is produced. The first commercialized technique of this category is known as Selective Laser Sintering.

The Selective Laser Sintering technique was developed in the mid-1980s by Dr. Carl Deckard and Dr. Joseph Beaman at the University of Texas at Austin, under DARPA sponsorship. As a result, Deckard and Beaman established the DTM Corporation with the explicit purpose of manufacturing SLS machines; in 2001, DTM was purchased by its largest competitor, 3D Systems.

SLM is a similar process to SLS, though there are some important differences. Instead of the substrate being sintered, it is fully melted to fuse the layers together. This is typically done in a chamber filled with an inert gas (usually nitrogen or argon) at incredibly low levels of oxygen (below 500 parts per million), to prevent unwanted chemical reactions as the material changes its physical state. This technique yields higher-density parts than any sintering process.

What Are the Advantages of this Process?

SLM is quick; it is one of the fastest rapid prototyping techniques (though, relatively speaking, most techniques are fast). In addition, it can potentially be one of the most accurate rapid prototyping processes, the major limiting factor being the particle size of the powdered material.

As mentioned previously, this technique yields higher density parts than other additive manufacturing techniques, making for a much stronger part.

Another advantage of the material bed is the ability to stack multiple parts into the build envelope. This can greatly increase the throughput of an SLM machine.

What Are the Disadvantages of this Process? […]

Additive manufacturing is not a new technology – it was introduced to the manufacturing industry in the late 1980s for very niche applications. Stereolithography, a variant of additive manufacturing, was introduced in 1986 for rapid prototyping applications; however, its true potential remained hidden for a long time. Additive manufacturing primarily refers to methods of creating a part or a tool using a layered approach. As a still-evolving technology, it now covers a family of processes such as material extrusion, material jetting, directed energy deposition, powder bed fusion, and more.

Additive manufacturing expands design possibilities by eliminating many manufacturing constraints. In contrast to rapid prototyping and 3D printing, the focus in additive manufacturing has shifted to functional requirements; however, the manufactured part may deviate from what is expected due to several factors typical of an additive manufacturing process:

  • Change in material properties: Mechanical and thermal properties of a manufactured part differ from the raw material properties. This happens due to the material phase change that is typical of most additive manufacturing applications.
  • Cracking and failure: The process itself generates a lot of heat, which produces residual stresses due to thermal expansion. These stresses can crack the material during manufacturing.
  • Distortion: Thermal stresses can also lead to distortion that makes the part unusable.

The additive manufacturing process is not yet certifiable, which is a major barrier to widespread commercial adoption. The ASTM F42 committee is working on defining AM standards with respect to materials, machines, and process variables.

The role of simulation in additive manufacturing

  • Functional design: The first objective is to generate a suitable design that meets functional requirements, and then to improve that design through optimization methodologies that work in parallel with simulation.
  • Generate a lattice structure: Many of the parts manufactured through AM have a lattice structure instead of a full continuum. One objective of simulation in AM is to generate a lattice structure and optimize it using sizing optimization (a toy sketch follows this list).
  • Calibrate material: As mentioned before, the material properties of a final part can differ substantially from those of the raw material. The next objective is to capture the phase transformation process through multi-scale material modeling.
  • Optimize the AM process: Unwanted residual stresses and distortions can develop in the process. It is necessary to accurately capture these physical changes to minimize the gap between the as-designed and as-manufactured part specs.
  • In-service performance: Evaluate how the manufactured part will perform under real-life service loads with respect to stiffness, fatigue, etc.
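
As a toy illustration of the sizing-optimization idea mentioned above, the sketch below bisects on a single strut radius until the axial stress just meets an allowable value. It is deliberately simplistic – real lattice sizing iterates with the FEA solver over thousands of struts – and all numbers are invented:

```python
# Toy sizing-optimization sketch for one lattice strut: find the smallest
# radius whose axial stress stays below the allowable. Illustration only.
import math

def size_strut(force_n: float, allowable_mpa: float,
               r_min: float = 0.1, r_max: float = 5.0) -> float:
    """Bisect on radius r (mm) so that stress = F / (pi r^2) <= allowable."""
    for _ in range(60):  # bisection converges long before 60 iterations
        r = 0.5 * (r_min + r_max)
        stress = force_n / (math.pi * r * r)  # N/mm^2, i.e. MPa
        if stress > allowable_mpa:
            r_min = r   # too slender: grow the strut
        else:
            r_max = r   # feasible: try a lighter strut
    return r_max

print(f"strut radius = {size_strut(500.0, 100.0):.3f} mm")  # ~1.262 mm
```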


Now let’s discuss each of these objectives in more detail, with respect to SIMULIA. […]

Read Part 1 here.

So, what does a structured process to data migration and translation look like?

First a few definitions:

  • Source system – the origin of the data that needs to be translated or migrated. This could be a database or a directory structure.
  • Target system – the final destination for the data. On completion of the process, data in the target should be in the correct format.
  • Verify – Ensure that data placed in the target system is complete, accurate, and meets defined standards.
  • Staging area – an interim location where data is transformed, cleaned, or converted before being sent to the target.

The process consists of five steps as shown below:

[Figure: the five-step migration process]

The process can be described as follows:

  • Data to be migrated is identified in the source system. This is an important step and ensures that only relevant data is moved. Junk data is left behind.
  • The identified data is extracted from the source system and placed in the staging area.
  • The data is then transformed into a format ready for the target system. Such transformation could be a CAD-to-CAD translation, a metadata change, or a cleaning process. Transformation may also entail data enrichment – for example, appending additional properties to the objects so they can be more easily found in the target system.
  • Transformed data is then loaded into the target system. This can be done automatically via programs or manually, depending on the chosen method. Automatic routines can fail, and failures are flagged for analysis and action.
  • Once data is loaded, validation is carried out to ensure that the migrated data is correct in the target system and not corrupted in some fashion.

The process as described above is shown at working level:

[Figure: working-level view of the process, showing extractor and loader tools]

Shown in this diagram are two software tools – extractors and loaders. These are usually custom utilities that use APIs, or hooks into the source and target systems, to move the identified data. For example, an extractor tool may query a source PLM system for all released and frozen data that was released after a given date. Once the search is complete, the identified data is downloaded by the extractor from the PLM system into the staging area.

In a similar manner, a loader will execute against a correct data set in the staging area and insert this into a target system, creating the required objects and adding the files.
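
To make the extractor/loader pair concrete, here is a schematic sketch of the full five-step loop. The Source and Target classes are in-memory stand-ins invented for illustration; real extractors and loaders would call the vendor APIs described above:

```python
# Schematic extract/transform/load skeleton for the five-step process.
# Source and Target are fake, in-memory stand-ins for real system APIs.
from pathlib import Path

STAGING = Path("./staging")

class Source:
    """Fake source system: item IDs mapped to payloads."""
    def __init__(self, items: dict[str, str]):
        self.items = items

    def query(self) -> list[str]:
        # Step 1: identify the relevant data (junk is left behind).
        return list(self.items)

    def download(self, item: str) -> Path:
        # Step 2: extract the item into the staging area.
        STAGING.mkdir(exist_ok=True)
        path = STAGING / f"{item}.txt"
        path.write_text(self.items[item])
        return path

class Target:
    """Fake target system that records loaded objects."""
    def __init__(self):
        self.loaded: dict[str, str] = {}

    def create_object(self, path: Path) -> None:
        # Step 4: load - create the object and attach the file content.
        self.loaded[path.stem] = path.read_text()

def transform(path: Path) -> Path:
    # Step 3: transform/enrich - here, append a property for findability.
    path.write_text(path.read_text() + "\nlegacy_source=PLM_A")
    return path

source = Source({"PART-100": "geometry..."})
target = Target()
for item in source.query():
    target.create_object(transform(source.download(item)))

# Step 5: validate that the migrated data arrived intact.
assert "PART-100" in target.loaded
```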

It is highly recommended that pilot migrations be carried out on test data in developmental environments to verify the process. This testing will identify potential bugs and allow them to be fixed before actual data is touched.

Such a structured process will guarantee success!

PDF Publishing

‘Nuff said.

*and there was much rejoicing*

Well, maybe I could add a little more detail. It has long been known that the PDF is the currency of visual data exchange. All too often, I work with users and organizations that have to print PDFs outside of Vault, creating uncontrolled documents. If you are using the item master (discussed by my colleague here), you can attach the PDF to the item; however, keeping it up to date is still a manual process.

Now, thanks to the #1 most requested feature being implemented, that will no longer be an issue. Vault will now publish PDFs as part of your release process (as part of a transition action in a lifecycle change). This file will be categorized differently than the native CAD file, or even the DWF visualization file. The new category is called “Design Representation,” which can then be assigned its own set of rules, properties, and lifecycles.

As of this release, we have the ability to publish 2D file formats, DWG and IDW; that means either AutoCAD based files or Inventor drawings can be published to PDF. At some point, Autodesk may add the 3D PDF generation that recently came to Inventor – which, by the by, could be used to publish all of the new Model Based Definition (MBD) annotations Inventor 2018 has added. I suspect we could see 3D publishing in the next release, or even a mid-year “R2” release (if there is an “R2” – who knows at this point).

Questions, comments, and celebrations welcome.

My last post outlined the significance of Product Cost Management (PCM) for OEMs and Suppliers to drive profitability and continuous improvement throughout the entire supply chain.

Ideally, PCM needs to happen early in the product development cycle, as early as the conceptual phase – design and supplier selection are much more flexible early in the process – so it is important to enable cost engineering during the front end of product development and ensure profitability with control over costs for parts and tooling.

Not everyone can optimize cost early, and not in all situations; PCM processes and tools may also need to be applied in later stages of the product lifecycle. Even when fact-based cost models and consultation are applied early in the lifecycle, they may need to be revisited several times over the lifecycle. PCM therefore needs to support the cost model across all corporate functions, from product development to sales, and establish a single consistent repository for estimating and communicating cost with repeatable processes and historical information. Because PCM spans the product lifecycle, it is important to take an enterprise-wide approach to costing. An ideal PCM system needs to align with the product development process managed in a PLM system, so there is a lot of synergy between PLM and PCM.

The most commonly used tools for PCM – spreadsheets and custom programs that perform simple rollups – are not suitable for enterprise-wide processes; these solutions do not provide the details required to develop credible cost models. They also make it very difficult for designers to compare products, concepts, and scenarios. Spreadsheets fail due to quality problems and the inability to deploy them effectively at enterprise scale, resulting in different product lines, geographies, or lines of business taking different approaches. Non-enterprise approaches also make it difficult to reuse information or apply product changes, currency fluctuations, burden rate updates, or commodity cost changes.

By extending an enterprise-wide system like PLM with PCM functions, cost management is effectively communicated and captured, institutionalizing it for future product programs. This eliminates disconnected and inconsistent manual costing models and complex, difficult-to-maintain spreadsheets. It also supports easy, fast, and reliable impact analysis, incorporating product changes accurately into costs with visibility into all cost factors, and makes these processes repeatable. The PCM process can also leverage the existing 3D parametric model data managed in PLM systems, extracting relevant parameters such as thickness, surface, and volume for feature-based calculations (see the sketch below). Other PLM data that can be reused for PCM includes labor rates from engineering project management, material costs from material management modules, and bills of materials/process and tooling from engineering and manufacturing data management. An integrated PLM and PCM solution is also important for efficiency, allowing companies to reuse both product data and cost models to facilitate continuous improvement over time.
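
To make the feature-based idea concrete, here is a toy sketch of a cost rollup driven by parameters extracted from the model. All rates and names are invented for illustration and do not come from any particular PCM product:

```python
# Toy feature-based cost rollup from parameters extracted out of the
# 3D model (material, volume, surface). All rates are invented examples.
from dataclasses import dataclass

MATERIAL_RATE = {"steel": 0.012, "aluminum": 0.020}  # $/cm^3, illustrative
FINISH_RATE = 0.005                                  # $/cm^2, surface finishing

@dataclass
class PartParams:
    material: str
    volume_cm3: float
    surface_cm2: float

def part_cost(p: PartParams, labor_hours: float, labor_rate: float) -> float:
    material = MATERIAL_RATE[p.material] * p.volume_cm3
    finishing = FINISH_RATE * p.surface_cm2
    labor = labor_hours * labor_rate  # e.g. rates from project management
    return material + finishing + labor

bracket = PartParams("aluminum", volume_cm3=120.0, surface_cm2=310.0)
print(f"${part_cost(bracket, labor_hours=0.2, labor_rate=45.0):.2f}")
```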

In the next post of this series, I will explain how the Siemens PLM Teamcenter suite supports PCM.

This post was originally created in January 2017.

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding the common practices and techniques. So, this week’s blog post will address a common type of 3D printing known as Powdered Bed & Inkjet 3D Printing (3DP).

What is Powdered Bed & Inkjet 3D Printing?

It is actually part of a broader category, commonly referred to as a Granular Based Technique. All granular based additive manufacturing techniques start with a bed of a powdered material. A laser beam or bonding agent joins the material in a cross section of the part. Then the platform beneath the bed of material is lowered, and a fresh layer of material is brushed over the top of the cross section. The process is then repeated until a complete part is produced. The first commercialized technique of this category is known as Selective Laser Sintering, though the main point of discussion here is Powdered Bed & Inkjet 3D Printing.

Invented in 1993 at the Massachusetts Institute of Technology, the technique was commercialized by Z Corporation in 1995. It uses a powdered material, traditionally a plaster or starch, held together with a binder. More materials are available now, such as calcium carbonate and powdered acrylic.

Though 3DP is a granular (or powder) based technique, it does not use a laser to create a part. Instead, a glue or binder joins each cross-section of the part. It is also worth mentioning that this technique is where the term “3D printing” originated, as it uses an inkjet-style print head.

What Are the Advantages of this Process?

This process is one of the few Rapid Prototyping Techniques that can produce fully colored parts, through the integration of inks in the binders.

In addition, the material costs for this particular technique are relatively low, due to their wide commercial availability.

Because parts are created in a bed of material, there is no need for support structures, unlike in other forms of rapid prototyping. This helps avoid secondary operations and machining.

Another advantage of the material bed is the ability to stack multiple parts into the build envelope. This can greatly increase the throughput of a 3DP machine.

What Are the Disadvantages of this Process? […]
