When a new software release comes out, there are usually a few key improvements that stand out. Often it is a cool new modeling feature, or perhaps an entirely new approach to design; in Inventor, think of the addition of Freeform Modeling or Direct Editing. Unfortunately, these are features or techniques that may not be applicable to many users.

If you are using Autodesk Inventor and Vault together, however, you should pay attention to this one: the Vault status icons in the “recently used” area. These icons now clearly identify the current Vault status of each of your recent files in the Inventor “Open” dialog box. Is the file checked out? Is the file checked in and up to date in my workspace? Has someone else modified the file since I last worked on it? Have I checked in my latest development ideas or new parts yet? All of these questions can be answered simply by noting the Vault status bubbles in the “Open” dialog box.

Vault Status Icons

This is a further follow-up to my previous articles on digital twins, focusing on the Feedback Loop pillar.

Smart Factory loop
The feedback loop starts with the Smart Factory. This is a fully digitalized factory model of a production system connected via sensors, SCADA systems, PLCs or other automation devices to the main product lifecycle management (PLM) data repository. In the Smart Factory, all events on the physical shop floor during production are recorded and pushed back to the PLM system, either directly or through the cloud. Artificial intelligence (AI) technology is used to study and analyze this information, and the main findings are sent back to product development, manufacturing planning or facility planning.
Why is this important? Production facilities and manufacturing processes tend to change immediately after the start of production. New ideas will be implemented, new working methods will be deployed and new suppliers might be selected, all requiring changes to the production system or process. Since these modifications will certainly impact the future, updating them in the system at this stage becomes a must. Production systems outlive the product lifecycle, and many companies use their production systems to make multiple products. These factors contribute to the growing need to regularly capture these changes in the PLM system, which can later distribute this information to all parties. The information collected during production can also serve as the basis for improving the maintainability of manufacturing resources. With this information, we can enable much better (sensor-based) condition-based maintenance, and thus increase uptime and productivity.
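To make the loop concrete, here is a minimal sketch of a shop-floor device reporting one recorded event toward the PLM repository. The endpoint URL, payload fields and transport are illustrative assumptions, not any vendor's actual API.

```python
import json
import urllib.request

# Hypothetical shop-floor event; the field names are invented for illustration.
event = {
    "station": "weld-cell-07",
    "signal": "cycle_time_s",
    "value": 42.5,
    "source": "PLC",   # sensor, SCADA system, PLC or other automation device
}

# Push the event to a (hypothetical) PLM-side REST endpoint, directly or via the cloud.
req = urllib.request.Request(
    "https://plm.example.com/api/shopfloor-events",
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # enabled in a real deployment; AI/analytics then run on the stored events
```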

Smart Product loop
Almost every product made today is a smart product. Many companies are looking for ways to improve the connection with their smart products while they are being used by their customers. Monitoring product use can provide a lot of knowledge for improving products. More than that, connecting to these smart products can generate a new type of business model that may result in more competitive offerings.

PLM Challenge

These feedback loops and the data they generate are a challenge for PLM, too. In the short term, the PLM issue for digital twins is how IoT-gathered data can best be put to work: extrapolated, parsed, and redirected. To where? At whose direction? The quick and easy answers are analytics running in the cloud, machine-to-machine (M2M) communication, and analyses based on artificial intelligence (AI). Such questions are expected as digital twins emerge as the next revolution in both data management and lifecycle management.
Ultimately, the use of PLM will allow us to bring digital twins into close correspondence, in sync, with their physical equivalents in the real world. When this comes to pass, we can expect problems to be uncovered more quickly and products to be better supported. Products with digital twins will be more reliable, with less downtime, while operating more efficiently and at lower cost. PLM-powered digital twins will boost user and owner confidence in their physical products. Ultimately, digital twins reflect what users and owners expect to receive when they sign a contract or purchase order.

A classic deployment of a digital twin includes three pillars: product design, manufacturing process planning and feedback loops.

  1. Product design

A digital twin includes all design elements of a product, namely:

  • 3D models using computer-aided design (CAD) systems
  • System models (using systems engineering product development solutions, such as systems-driven product development)
  • Bill-of-materials (BOM)
  • 1D, 2D and 3D analysis models using computer-aided engineering (CAE) systems such as Simcenter™ software
  • Digital software design and testing using application lifecycle management (ALM) systems such as Polarion ALM software
  • Electronics design using systems developed by Mentor Graphics

Using these elements results in a comprehensive computerized model of the product, enabling almost 100 percent virtual validation and testing of the product under design. All of this reduces the need for physical prototypes, shortens development time, improves the quality of the final manufactured product and enables faster iterations in response to customer feedback.

  2. Manufacturing process planning

The Siemens solutions available today enable the development of three models critical to any manufacturer:

  • Manufacturing process model – the how – resulting in an accurate description of how this product will be produced
  • Production facility model – the where – providing a full digital representation of the production and assembly lines needed to make the product
  • Production facility automation model – describing how the automation system, including supervisory control and data acquisition (SCADA) systems, programmable logic controller (PLC) hardware and software, human-machine interface (HMI) hardware and software, etc., will support the production system

The digital twin in manufacturing offers a unique opportunity to virtually simulate, validate and optimize the entire production system. It also lets you test how the product, with all its primary parts and subassemblies, will be built using the planned manufacturing processes, production lines and automation.

  3. Feedback loops

When it comes to the feedback loops of the digital twin pillars, there are two kinds that have a significant impact on most manufacturers:

  • The Smart Factory Loop and
  • The Smart Product Loop.

The “Product design” and “Manufacturing process planning” pillars have been around for a while, but “Feedback loops” is a newer one. I will discuss it in more detail in my next blog.

Any complete FEA solution has at least three mandatory components: a pre-processor, a solver and a post-processor. If you compare it with an automobile, the solver is the engine: it contains all the steps and solution sequences needed to solve the discretized model, and it can be regarded as the main power source of a CAE system. The pre-processor is a graphical user interface that allows the user to define all the inputs to the model, such as geometry, material, loads and boundary conditions. In our automobile analogy, the pre-processor is the ignition key, without which it is not possible to utilize the engine (solver) efficiently. The post-processor is a visualization tool for drawing conclusions from the requested output, whether text or binary. A good CAE workflow is regarded as one that offers closed-loop CAD-to-CAE data transfer.
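As a minimal sketch of how these three components fit together, consider a one-dimensional bar under an axial tip load. The function names and values below are illustrative assumptions for this post, not any particular tool's API:

```python
import numpy as np

def pre_process(length, area, modulus, load, n_elems):
    """Pre-processor: turn geometry, material and loads into a discretized model."""
    k_elem = modulus * area / (length / n_elems)       # element stiffness EA/Le
    n_nodes = n_elems + 1
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elems):                           # assemble the global stiffness matrix
        K[e:e + 2, e:e + 2] += k_elem * np.array([[1.0, -1.0], [-1.0, 1.0]])
    F = np.zeros(n_nodes)
    F[-1] = load                                       # axial load at the free end
    return K, F

def solve(K, F):
    """Solver: the 'engine' that solves the discretized model (node 0 is fixed)."""
    x = np.zeros(len(F))
    x[1:] = np.linalg.solve(K[1:, 1:], F[1:])
    return x

def post_process(x):
    """Post-processor: draw conclusions from the requested output."""
    print(f"tip displacement: {x[-1]:.4f} mm")

# Steel bar, 100 mm long, 10 mm^2 section, 10 kN tip load (illustrative values).
post_process(solve(*pre_process(length=100.0, area=10.0, modulus=210e3, load=1e4, n_elems=4)))
```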

The above workflow is not closed, so there is no scope for a model update; any design change requires all of the rework to be repeated. This has been the traditional workflow in organizations with completely disconnected design and analysis departments: designers send the CAD data to analysts, who perform FEA in specialized tools and submit the virtual product performance report back to the designers. If a change is needed, the FEA is manually performed all over again. Let’s look at a better workflow.
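A minimal sketch of such a closed loop, assuming a made-up stiffness model and deflection requirement purely for illustration:

```python
# Closed loop: because the whole analysis setup is a function of the design
# parameters, an updated design re-runs automatically with no manual rework.
def analyze(thickness_mm):
    stiffness = 21.0 * thickness_mm      # stand-in for the full pre-process/solve pipeline
    return 100.0 / stiffness             # tip displacement under a fixed load

design = 4.0
while analyze(design) > 1.0:             # design requirement: deflection <= 1.0 mm
    design += 0.5                        # update the design; pre-processing maps over automatically
print(f"accepted thickness: {design} mm, deflection: {analyze(design):.3f} mm")
```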

In this workflow, if the initial design does not meet the design requirements, it is updated and sent to the solver, not back to the pre-processor: all the pre-processing steps are mapped from the old design to the new design without any manual intervention. This is an effort to bridge the gap between the design and analysis departments, and the industry has embraced it. The extent to which the gap can be bridged depends on the chosen workflow, but almost every CAE company has taken the initiative to introduce products that bridge it to some extent. Let’s discuss this in the context of Dassault Systemes and Siemens.

Dassault Systemes: After acquiring Abaqus, Inc. in 2005, Dassault Systemes rebranded it as SIMULIA, with the objective of giving users access to simulation capabilities without the steep learning curve of disparate, traditional simulation tools. They have been introducing new tools to meet this objective ever since.

  • The first in the series was the Associative Interfaces for CATIA, Pro-E and SolidWorks, which are plug-ins to Abaqus/CAE. With these plug-ins it is possible to automatically transfer updated data from the above-mentioned CAD platforms to Abaqus/CAE with a single click. All the CAE parameters in Abaqus/CAE are mapped from the old design to the updated design. It is a nice way to reduce rework, but design and simulation teams remain separate in this workflow.
  • The next initiative was SIMULIA V5, in which Abaqus was introduced into CATIA V5 as a separate workbench. This workbench adds toolbars to define the Abaqus model and generate the Abaqus input file from within CATIA. Add Knowledgeware, and the user has everything needed to perform DOEs and parametric studies. This approach brings designers and analysts with CATIA experience under one roof.
  • Next, Dassault Systemes introduced SIMULIA on the 3DEXPERIENCE platform, allowing analysts to use data management, process management and collaboration tools with Abaqus in the form of simulation apps and roles. The solution is now mature, incorporating process optimization, lightweight optimization, durability and advanced CFD tools. With the merging of SIMULIA and BIOVIA, we are also talking about multiscale simulation from the system level down to the molecular level. It is further possible to perform the simulation and store the data on a public or private cloud.

Siemens PLM solutions: Siemens’ traditional CAE tools include the FEMAP user interface and the NX Nastran solver. Both have been specialized tools meant primarily for analysts, with little or no connectivity to CAD. More specialized, domain-specific tools were added with the acquisitions of LMS and Mentor Graphics.

  • In 2016, Siemens introduced its new simulation portfolio, called Simcenter, which includes all Siemens simulation capabilities that can be integrated with the NX environment. The popular pre-processor in the Simcenter series is NX CAE, which has bi-directional associativity with NX CAD. Though meant for specialists, NX CAE offers a closed-loop workflow between NX CAD and NX Nastran, making it easier to evaluate re-designs and perform DOEs.
  • Siemens also offers NX CAE add-on environments for Abaqus and Ansys, allowing analysts to efficiently incorporate these solvers into their NX design environment.
  • It is further possible to use Simcenter solutions with Siemens’ well-known PLM solution, Teamcenter, for enterprise-wide deployment of Siemens simulation tools.

This shift in approach is not limited to Dassault Systemes and Siemens. Every organization in this space, be it Ansys, Autodesk or Altair, is introducing such closed-loop solutions. One reason may be the recent acquisition of many CAE companies by bigger organizations such as Dassault, Siemens and Autodesk. Nevertheless, the change has been triggered, and it will continue.

Autodesk University Session: 60 Tips in 60 Minutes – Autodesk Inventor 2018 Quick Tips

Whether you’re new to Inventor software or a seasoned pro, you’ll learn something from this fast-paced course highlighting 60 Inventor tips in 60 minutes. We’ll showcase some of the less obvious commands or features and their location within the Inventor environment. Along the way, we’ll look at how some of the tips work and how they might help you in your daily designing. So buckle up—we’ve got a lot to cover and only 60 minutes to get it done.

Find out more about Tim’s Autodesk University Session:  Autodesk University Session Registration

With an i GET IT subscription, log in at https://myigetit.com to view the upcoming live technical sessions and recordings, including Tim’s Autodesk 2018 Quick Tips session recording.


About i GET IT Online Training Management for Engineers

i GET IT is an online engineering knowledge development and sharing tool, which specifically addresses the engineering community with an extensive MCAD/PLM training library, powerful customization tools, learning management features and assessment capabilities.

Unlike other generic learning providers, i GET IT is created by dedicated resources from industry PLM leaders at Tata Technologies. This allows us to offer the most comprehensive training solution for the leading engineering design and manufacturing applications plus industry skills, providing a consistent and updated offering for each release. It also allows i GET IT to consult directly with customers, providing customized solutions that fit your exact training needs and beyond.

So how does your company handle the training and skill advancement needs of your engineers?  Realize your design potential at https://myigetit.com


There is interesting news regarding CATIA to share with the composites user community. While almost all composites-related functionality, such as design by zones/plies, ply drop-offs, core sampling, ply producibility, ply flattening, ply cut-outs and lay-up export, has long existed natively in the CATIA composites workbenches, one valuable piece has been missing. That piece is Laser Projection, a tool that helps manufacturing engineers place cut plies at the right location on the tool. Earlier, this functionality was offered through a Dassault Systemes software partner called Majestic. However, Majestic was acquired by Autodesk a while ago, so Dassault Systemes decided to develop similar functionality in-house.

Laser Projection functionality was introduced in the V5-6R2016 release of CATIA, in both the classic and Express configurations, and has been refined in service packs such as V5-6R2016 SP2 and SP3. In the classic configuration the license is named CLA; in the Express configuration it is named LPX. A CATIA composites design or manufacturing workbench is a prerequisite in either configuration. The technology is most suitable for hand-layup parts such as panels, hulls and wind blades.

Within the application, it is possible to define any number of lasers by coordinates and assign properties to them, such as dimensions and range in terms of distance and horizontal and vertical angles. It is also possible to optimize the resource allocation. The reach envelope can be visualized to make sure the largest ply in the model can be displayed with the given number of lasers; if not, more lasers can be defined or their positions changed.
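As a rough illustration of what such a reach-envelope check involves, here is a minimal 2D sketch; the laser properties, the ply points and the function itself are hypothetical simplifications, not CATIA’s actual algorithm:

```python
import math

# Simplified 2D reach check: every boundary point of a ply must fall inside the
# laser's distance range and angular window (angle wrap-around ignored for brevity).
def ply_within_reach(laser_xy, max_range, aim_deg, half_angle_deg, ply_points):
    for px, py in ply_points:
        dx, dy = px - laser_xy[0], py - laser_xy[1]
        if math.hypot(dx, dy) > max_range:
            return False                      # point beyond the laser's range
        bearing = math.degrees(math.atan2(dy, dx))
        if abs(bearing - aim_deg) > half_angle_deg:
            return False                      # point outside the angular window
    return True

ply = [(1.0, 2.0), (1.5, 2.5), (2.0, 2.0)]    # boundary points of the largest ply (made up)
print(ply_within_reach((0.0, 0.0), max_range=5.0, aim_deg=50.0, half_angle_deg=40.0, ply_points=ply))
```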

The Laser Projection module is compatible with most commercially available projector machines, from vendors such as Virtek, LAP and LPT. Core thickness as well as ply thickness is automatically taken into account during projection. It is also possible to change display properties such as laser color and the length of normal vectors, and to include additional geometry or text in the display from predefined CATIA sets.

For any further information regarding the licensing or functionality of this module, including a demonstration, please approach us; we are ready to help. It is also possible to import laser projection files, such as those with .py and .cal extensions, to review laser projection data in CATIA.

According to a PLM Foresight Opinion Poll conducted by CIMData, 70% of senior executives in manufacturing companies saw little or no value in PLM. This is a troubling statistic, as it shows that PLM is not as widely adopted and embraced as it should be, even though it can bring huge efficiency gains to an organization and prevent many errors.

How can you get efficiency from PLM? One approach is to use a maturity assessment. These models investigate issues related to best practices of the system being evaluated, using defined “pillars” of major functionality. When the maturity of a given “pillar” is evaluated and measured, it yields current and future capability levels and can be used to identify potential improvements and goals for the system being evaluated. When a maturity model is applied to and shared by a number of organizations in a particular industry, it can provide an industry-specific benchmark against which to evaluate an organization’s maturity with respect to others in the same industry.
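To make the idea concrete, here is a minimal sketch of the current-versus-future scoring such an assessment produces; the pillar names, the 1-5 scale and the scores are invented for illustration and are not the actual model:

```python
# Maturity scoring sketch: each pillar gets a current and a desired future level.
pillars = {                          # pillar: (current level, desired future level)
    "BOM management":    (2, 4),
    "Change management": (3, 4),
    "Requirements":      (1, 3),
}

# Rank pillars by maturity gap, largest first, to suggest implementation priority.
for name, (current, future) in sorted(pillars.items(), key=lambda p: p[1][1] - p[1][0], reverse=True):
    print(f"{name}: current {current}, target {future}, gap {future - current}")
```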

So why should a company assess the maturity of its PLM? This assessment can guide companies to a PLM roadmap that will improve the state of their current system. The roadmap will allow them to deploy appropriate technologies, processes, and organizational changes that enhance the overall product development process from early concept through product end of life. This in turn leads to improved bottom-line and top-line benefits.

Tata Technologies has developed PLM Analytics as a framework to provide customers with information about the state of PLM within their enterprise and the maturity (ability to adopt and deploy) of their PLM, and ultimately to create a detailed PLM roadmap that supports their business strategy and objectives. Each component builds on and complements the others, but can be conducted independently.

What is PLM Analytics? It comprises four components, described below:

PLM Benchmark – Helps triage your PLM needs and shows how you stack up against the competition. The resulting report benchmarks performance against 17 industry-standard “pillars” and evaluates the current state and desired future state, with an indication of implementation priority. The Benchmark is a consultant-led personal interview with one or more key business leaders. It is toolset agnostic.

PLM Impact – Builds a business case and demonstrates how PLM can save you money. Once a PLM Benchmark has been completed, a consultant-led series of interviews with multiple key business leaders can identify savings opportunities. These opportunities are summarized as financial metrics, payback and ROI, which can be used to validate proposed PLM initiatives and provide decision makers with key financial data.

PLM Healthcheck – Understand how your current PLM works from the people who use it. The healthcheck surveys a cross-section of your product design team via online assessments to establish PLM health related to organization, processes, and technology. The results identify gaps against best practices, assess the consistency of organizational performance, and prioritize areas for improvement. The healthcheck can be used on a global basis.

PLM Roadmap – A 360° view of your PLM plus a detailed roadmap for pragmatic implementation. The roadmap is constructed from onsite interviews with senior leaders, middle management and end users across product development. These sessions focus on the specific business processes and technologies to be improved and result in a PLM Roadmap: an actionable improvement plan with defined activities and internal owners to ensure successful implementation.

By using this suite of tools, Tata Technologies can put you on the road to PLM success!

You can’t get away from it: IoT, the Internet of Things. Connected devices everywhere. It is estimated that there will be over 50 billion connected devices by 2020. At this rate, our toasters will be connected to the internet! From your smartphone you could control the heat level, monitor the temperature to look for cool spots, and watch the carbohydrates carbonize in real time!

While we may not actually see the connected toaster (at least I hope not), many companies are looking into their own IoT strategies.

Manufacturers are no different. Industrial IoT (IIoT) is helping manufacturers learn more about their own products. All this information is helping to create brand-new business models: selling outcomes instead of products. GE is probably the most notable example; they sell flight hours instead of jet engines. With the immense amount of data generated by sensors on these engines, GE is able to predict failures and analyze performance to fine-tune operation.

The question now is not how a company can utilize this type of operational feedback, but when. Manufacturers will have to understand how that data fits into their systems engineering. As of now, this requires a company to apply the digital thread and digital twin philosophy. In short, the digital twin first accounts for the PLM’ish information leading up to delivery (requirements, simulations, designs, manufacturing processes, etc.). Then, each physical product gets an instance of the digital twin to record what happens after delivery. This includes as-built conditions, service modifications, and, here is the IoT connection, operational parameters.
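A minimal sketch of what such a per-product instance might hold; the field names are invented for illustration and do not reflect any real PLM schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DigitalTwinInstance:
    """One digital-twin instance per physical product, recording life after delivery."""
    serial_number: str
    design_revision: str                       # link back to the pre-delivery definition
    as_built: Dict[str, str] = field(default_factory=dict)
    service_modifications: List[str] = field(default_factory=list)
    operational_readings: List[Dict[str, float]] = field(default_factory=list)

    def record_reading(self, reading: Dict[str, float]) -> None:
        """Append one batch of operational parameters streamed from the product (the IoT connection)."""
        self.operational_readings.append(reading)

engine = DigitalTwinInstance(serial_number="SN-001", design_revision="C")
engine.as_built["blade_set"] = "lot-B42"                     # as-built condition
engine.service_modifications.append("SB-2017-014 applied")   # service modification
engine.record_reading({"egt_degC": 620.0, "vibration_mm_s": 2.1})
```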

The market may not yet be ready for this level of digitalization (a term I first heard here, at least in this context). Many companies are still using shared drives and spreadsheets as their PLM tools of choice. What we discovered is that if a company wants an IoT strategy, it has to understand what that entails. Will the company be ready to change its culture fast enough? IIoT is a great feedback loop; however, there has to be something to feed all that data back to. The promise is there, and I think people understand the benefits.

To answer the original question: I think PLM may be ready for it, as it matures and incorporates systems engineering, simulation-in-the-loop, and other enabling technologies. However, I don’t think the market is ready for the marriage of IIoT and PLM.

In the FEA solver world, users come across multiple numerical schemes for solving the formulated stiffness matrix of a problem. The most popular are the implicit and explicit solvers; in Abaqus terminology, these are the Standard and Explicit solvers, respectively. Each scheme has its own merits and demerits, and this blog post compares the two based on several parameters.

For ease of understanding, I am avoiding the use of long and complicated mathematical equations in this post. 😉

Implicit Scheme

From an application perspective, this scheme is primarily used for static problems that do not exhibit severe discontinuities. Let’s take the simplest example, a linear static problem, in which any physical situation can be mathematically formulated as:

[K]{x}={F}

Here K is the stiffness matrix, x is the displacement vector and F is the load vector. The size of the matrix and vectors varies with the number of degrees of freedom in the problem; for example, a node carries up to six degrees of freedom in a 3D structural problem but only three in a 2D structural problem. The composition of K is primarily governed by the material properties and geometry, while F includes the forces and moments at each node of the mesh. To solve the above equation for x, the matrix K must be inverted (in practice, the system is factorized and solved). The resulting displacement solution is then used to compute other output variables, such as strains, stresses and reaction forces.
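As a minimal sketch of this solve, take a toy model of two springs in series with made-up stiffness and load values (not from any real analysis); note that the code solves the linear system rather than explicitly inverting K:

```python
import numpy as np

# Two springs in series: node 0 fixed, nodes 1 and 2 free, 100 N at the free end.
k1, k2 = 1000.0, 2000.0                      # spring stiffnesses, N/mm (illustrative)
K = np.array([[ k1,     -k1,      0.0],
              [-k1, k1 + k2,     -k2],
              [0.0,     -k2,      k2]])      # assembled stiffness matrix [K]
F = np.array([0.0, 0.0, 100.0])              # load vector {F}

free = [1, 2]                                # node 0 is fixed (boundary condition)
x = np.zeros(3)
x[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])   # the "inversion" step

# The displacement solution feeds the other output variables, e.g. reaction forces:
reactions = K @ x - F
print("displacements [mm]:", x)                  # [0.0, 0.1, 0.15]
print("reaction at node 0 [N]:", reactions[0])   # -100.0
```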

[M] d²{x}/dt² + [C] d{x}/dt + [K]{x} = {F}

The implicit scheme is applicable to dynamic problems as well. In the above equation, M is the mass matrix, C is the damping matrix, and the rest are as before. This equation is defined in real time. Backward Euler time integration is used to discretize it; the scheme is implicit because the state of the system at the end of an increment depends on the unknown state at that same, later time, so each increment requires solving a system of equations. K-matrix inversion therefore takes place in the dynamic scenario as well, because the objective is still to solve for x. The Abaqus Standard solver offers three approaches to implicit dynamic problems: quasi-static, moderate dissipation and transient fidelity. Each is recommended for specific types of non-linear dynamic behavior; for example, the quasi-static method works well in problems with severe damping.
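A minimal sketch of backward Euler applied to this equation, with a made-up two-DOF spring-mass-damper model; note how every increment performs a linear solve with an effective stiffness matrix:

```python
import numpy as np

# 2-DOF spring-mass-damper chain under a constant load; all values invented.
M = np.diag([1.0, 1.0])                                # mass matrix [M]
C = np.array([[0.4, -0.2], [-0.2, 0.2]])               # damping matrix [C]
K = np.array([[300.0, -100.0], [-100.0, 100.0]])       # stiffness matrix [K]
F = np.array([0.0, 5.0])                               # load vector {F}

dt, n_steps = 0.01, 500
x = np.zeros(2)    # displacements
v = np.zeros(2)    # velocities

# The effective matrix is constant for a linear problem, so it could be factorized once.
A = M + dt * C + dt**2 * K
for _ in range(n_steps):
    # Implicit update: the unknown new state sits on the left-hand side,
    # so every increment requires a linear solve (the "K inversion").
    v = np.linalg.solve(A, M @ v + dt * (F - K @ x))
    x = x + dt * v

print("displacement after 5 s:", x)   # approaches the static solution inv(K) @ F
```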

Merits of this scheme

  • For linear problems, in which K is constant, the implicit scheme provides the solution in a single increment.
  • For non-linear problems, in which K is a function of x and the problem must therefore be solved in multiple increments for accuracy, the size of each increment can be considerably large because the scheme is unconditionally stable.

For these reasons, the implicit scheme is preferred for linear and non-linear static problems, and for dynamic problems that are slow or moderate in nature with respect to time.

Demerits of this scheme […]

In a world of controlled, strict, inflexible systems, people start to get creative. For some, it’s the crushing weight of a massively customized ERP system that has somehow spread to every part of the organization; for others, it’s circumventing “inconvenient” safety devices. When things need to get done, sometimes we take matters into our own hands. In the IT world, this is called “Shadow IT”: any app, software, or program that isn’t under the control of (or even known to) the IT department. Users download freeware, buy their own software, use cloud services, and so on. Even NASA has difficulty reining in its employees when it comes to using non-sanctioned software.

This behavior extends into the design and engineering office as well, perhaps more so than into other parts of an average organization. It’s in engineers’ nature to solve problems; it’s practically the key attribute of the job description! I can understand why: engineers live and breathe efficiency, and being encumbered by poorly designed systems is not efficient at all.

Case in point: the bill of materials (BOM). How many systems are controlling it? ERP? The design tool? PDM? Some home-grown custom system? By and large…no. Most work-in-process BOMs are kept in spreadsheets. And why not? Spreadsheet applications are easy to use and share, don’t require much training, and don’t require six levels of approval to implement. Spreadsheets can even be automated with macros, making BOMs configurable and intelligent. Eventually all items do end up in the ERP, though typically not until late in a product’s (or project’s, or program’s) lifecycle.
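To see why spreadsheets feel adequate, here is a minimal sketch of the multi-level quantity rollup such macros typically perform; the part numbers, structure and quantities are made up for illustration:

```python
# Toy multi-level BOM: parent -> list of (child, quantity per parent).
bom = {
    "BIKE":  [("FRAME", 1), ("WHEEL", 2)],
    "WHEEL": [("RIM", 1), ("SPOKE", 32)],
}

def rollup(item, qty=1, totals=None):
    """Walk the BOM tree and accumulate the total quantity of each component."""
    totals = {} if totals is None else totals
    for child, child_qty in bom.get(item, []):
        totals[child] = totals.get(child, 0) + qty * child_qty
        rollup(child, qty * child_qty, totals)
    return totals

print(rollup("BIKE"))   # {'FRAME': 1, 'WHEEL': 2, 'RIM': 2, 'SPOKE': 64}
```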

So why not stay with the spreadsheets? What if someone edits those macros and an unknowing user doesn’t verify the end result? What if the file gets corrupted? How does the rest of the organization gain visibility into the latest changes? What ensures that the BOM is up to date with the design? Ultimately, the BOM should be controlled in a PLM system, much to the chagrin of clever engineers everywhere. Here’s why.

It is the same as when the market moved from pencil drawings, french curves, vellums, and other manual drafting techniques to 2D CAD tools, and again from 2D design to 3D modeling: change. Yes, sketching a part by hand is faster than creating the lines and circles, but CAD technology propagates updates caused by change much faster than a manual process. The gains of going from 2D designs to 3D models are even more staggering. “You mean one edit and it updates everywhere? Even the drawing?” Anecdotally, I had a customer say, “With 2D, whenever the shop called, I was worried about what we messed up. Now, with 3D models, when the shop calls, I worry about what they messed up.”

Again, it’s about the rate of change, and the propagation and validation of that change. Spreadsheets cannot do that (unless you have some really wicked cool macros).

With our PLM Analytics Benchmark, we can help your company to assess the nature of your BOM needs, as well as the 16 pillars of PLM. Let us know if we can be of service!
