Enterprise-wide PLM systems hold huge amounts of business data that can potentially be used to drive business decisions and effect process changes that generate added value for the organization. Many PLM users are unaware that such valuable data exists, while for others, advanced data search and retrieval can feel like looking for a needle in a haystack because they are unfamiliar with the PLM data model. It is therefore important to process the data into meaningful information and model it into actionable engineering knowledge that can drive business decisions for everyday users. Reporting plays a key role in summarizing that large amount of data into a simple, usable format that is easy to understand.

Reporting starts with capturing the right data – the most important step and, often, the least emphasized one. When data is not captured in the right format, the result is inconsistent or non-standard data.

Let’s take a simple workflow process example: workflow rejection comments are valuable information that helps companies understand the recurring reasons for rejection and improve FTY (first-time yield) by developing training plans to address them. Users might not enter rejection comments unless they are made mandatory, so it’s important to have data-model checks and balances that capture the right data and standardize it through categorization and LOVs (lists of values).
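
As a generic illustration (not tied to any particular PLM platform; the reason list and function names here are hypothetical), a simple validation layer like the sketch below is all it takes to force rejection data into a standardized, reportable form:

```python
# Minimal sketch of LOV-based data capture validation (hypothetical names,
# not a real PLM API): a workflow rejection must carry a standardized reason,
# and free text is required when the reason is "Other".

REJECTION_REASONS = [
    "Missing attachment",
    "Incorrect revision",
    "Drawing standard violation",
    "Other",
]

def validate_rejection(reason: str, comment: str) -> None:
    """Reject the transaction if the captured data is not report-ready."""
    if reason not in REJECTION_REASONS:
        raise ValueError(f"Reason must be one of the LOV entries: {REJECTION_REASONS}")
    if reason == "Other" and not comment.strip():
        raise ValueError("A comment is mandatory when the reason is 'Other'")

# Example: this passes, so the rejection is stored in a consistent, reportable form.
validate_rejection("Incorrect revision", "Released rev B supersedes this change")
```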

The next step is to filter and present the right information to the right people. End users typically want to run pre-designed reports and perhaps slice and dice the data to understand it better. Business intelligence designers and business analysts who understand the PLM schema and its business relationships are the ones who design the report templates. Report design is sometimes perceived as an IT or software function, and as a result sufficient business participation is not ensured, which can reduce the effectiveness of the reports for end users. It is important to have business participation from report identification through report design to report usage. Business process knowledge is the key in this area, not PLM tool expertise alone.

Because business processes are improved and modified based on market and performance trends derived from PLM reports, it’s important to have continuous improvement initiatives that fine-tune reporting, from data capture to presentation, based on these improved processes and new baselines. That makes it a continuous cycle: business processes need to be designed to support reporting, and reports need to help improve the process.

Properly designed reports provide increased visibility into shifting enterprise-wide status, reduce the time and cost of data analysis, ensure quicker response times and faster product launch cycles, and improve product quality and integrity.

How do your reports measure up? Do you have any questions or thoughts? Leave a comment here or contact us if you’re feeling shy.

The Dassault Systèmes SIMULIA portfolio releases new versions of its software products every year, and this year is no different. The first release of Abaqus 2017 is now available for download at the media download portal. SIMULIA has developed and broadcast 2017 release webinars to make users aware of new features, but those webinars are long recordings, ranging from one to two hours each, which can be daunting. This blog post provides a brief highlight of the Abaqus/Standard and Abaqus/Explicit updates in the Abaqus 2017 solvers. A more detailed explanation of any mentioned update, or answers to further questions, can be obtained by listening to the webinar recordings at the SIMULIA 3DExperience user community portal, leaving a comment on this post, or contacting us.

Updates in Abaqus Standard

Abaqus Standard 2017 has been substantially improved with respect to contact formulations. The key improvements to contact functionality are highlighted below.

  • Edge-to-surface contact has been enhanced to support beams as the master definition. This new approach captures the twisting of beams during frictional contact.
  • Cohesive behavior in general contact.

General contact has always been useful in situations where it is cumbersome to visualize and define a large number of contact pairs, even using the contact wizard, or where it is not possible to predict contact interactions from the initial configuration. General contact now supports cohesive behavior, making it possible to define contact in situations like the one shown in the figure below.

 

Cohesive contact does not constrain rotational degrees of freedom. These DOFs should be constrained separately to avoid pivot ratio errors.

There have been a few other changes in cohesive contact interactions. In the 2016 release, only first-time cohesive contact was allowed by default, i.e. either cohesive behavior that is closed at initial contact, or an initially open contact that could convert to a closed cohesive contact only once. In the 2017 release, only contact that is closed initially maintains cohesive behavior under the default settings; contact that is open initially cannot convert to cohesive contact later. However, it is possible to change the default settings.


 

  • Linear complementarity problem

A new step feature has been introduced to remove some limitations of the linear perturbation step. In earlier releases, it was not possible for contact defined in a perturbation step to change its status from open to closed or vice versa. In the 2017 release, an LCP-type technique has been introduced in the perturbation step to define frictionless, small-sliding contact that can change its contact status. No other forms of nonlinearity are supported in perturbation steps. LCP is available only for static problems; dynamic steps are not supported.
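
In general terms, this is the textbook form of a linear complementarity problem (not necessarily the exact formulation Abaqus uses internally): find contact pressures $p$ and gaps $g$ such that

$$ g = g_0 + A\,p, \qquad g \ge 0, \qquad p \ge 0, \qquad p^{\mathsf{T}} g = 0, $$

where $g_0$ holds the initial gaps and $A$ is a compliance matrix obtained from the perturbation stiffness. The complementarity condition $p^{\mathsf{T}} g = 0$ is what lets each contact point switch between open ($g > 0$, $p = 0$) and closed ($g = 0$, $p \ge 0$) within a single linear perturbation step.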


Updates in Abaqus XFEM (crack modeling) […]

Autodesk Vault offers several methods and workflows for finding files, understanding your data, and organizing your work.  Let’s take a look at each.

  1. Basic search – This essentially searches the file name and all the properties where the “Basic Search” option is turned on in the properties administration area.  This can be a very broad search if you use a lot of different properties in Vault.  Many people will use this to see if they can start getting some relevant results, and then will use one of the other, more advanced search types if the basic search returns too many results.
  2. Criteria search – This is my preferred search for narrowing down a set of results to just what I am looking for.  The criteria search lets you start with a basic search, but then lets you refine the results by entering values for specific properties.  As an example, you could search for a document created by a specific user, within the last 6 months, for a specific project, and with the word “collet” in one of the other properties.  This should produce a very targeted result, and works well when you remember that you created something a while back but just can’t remember the file name or where it was saved.
  3. Advanced search – The advanced search is even more structured than the criteria search.  It allows for the same level of refinement of results, but lives in a more rigid interface in its own window. There are separate tabs for the basic search, the advanced search, and options.  Unfortunately, the basic and advanced options can’t be combined into a single search as they can with the criteria search.  The big advantage of the advanced search is the ability to save a “search folder,” which I’ll explain next.
  4. Search folders – Search folders are saved from the advanced search interface and allow you to reuse search criteria in a fast, easy-to-use manner.  When a search is saved as a folder, it shows up in the folder list on the left of the Vault client interface. You can simply pick one of these search folders to display an updated list of all that search’s results. I commonly use this to display all the files I still have checked out to me.  For a manager, this is also a good way to make sure other users are consistently remembering to check in their files.

The Dassault Systèmes SIMULIA portfolio releases new versions of its software products every year, and this year is no different. The first release of Abaqus 2017 is now available for download at the media download portal. In this blog post, I provide a brief highlight of updates in Abaqus CAE 2017. A more detailed explanation of any mentioned update, or answers to further questions, can be obtained either by listening to the webinar recordings at the SIMULIA 3DExperience user community portal, leaving a comment on this post, or contacting us.

  • Consistency check for missing sections

Abaqus CAE users would probably agree that missing section assignments are a common mistake, even though parts with defined section assignments are displayed in a separate color. In previous releases, this check was not included in data check runs, so the error could not be detected unless a full analysis was executed. In the 2017 release, regions with missing sections are identified during a data check run, saving time by eliminating runs that end in a fatal error.


 

  • New set and surface queries in query toolset

Sets and surfaces can be created at the part level as well as the assembly level. In earlier releases, it was not possible to see the contents of a set or surface as text, though the contents could be visualized in the viewport. In the 2017 release, the query toolset includes set and surface definition options. For sets, information about geometry, nodes, and elements can be obtained with respect to label, type, connectivity, and association with a part or instance, whichever is applicable. For surfaces, the name, type, and association with instances, constraints, or interactions can be obtained.


 

  • Geometry face normal query

In the 2017 release, it is possible to visualize the normal of a face or surface by picking it in the viewport. For planar faces, the normal is displayed instantly. For curved faces, Abaqus CAE prompts the user to pick a point location on the face using one of several options.


[…]

In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead.

— Aristotle, Physics VI:9, 239b15

This paradox, as first developed by Zeno, and later retold by Aristotle, shows us that mathematical theory can be disproved by taking the hypothesis to an absurd conclusion.  To look at it another way, consider this joke:

A mathematician and an engineer are trapped in a burning room.

The mathematician says “We’re doomed! First we have to cover half the distance between where we are and the door, then half the distance that remains, then half of that distance, and so on. The series is infinite.  There’ll always be some finite distance between us and the door.”

The engineer starts to run and says “Well, I figure I can get close enough for all practical purposes.”
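
For the record, the mathematician’s objection dissolves once you sum the series: the infinitely many half-steps cover exactly the full distance $d$,

$$ \sum_{n=1}^{\infty} \frac{d}{2^{n}} = \frac{d}{2} + \frac{d}{4} + \frac{d}{8} + \cdots = d, $$

and at constant speed the corresponding time intervals form the same convergent series, so the door is reached in finite time. The engineer simply skips the sum.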

The principle here, as it relates to simulation methods like FEA, is that every incremental step taken in the simulation process gets us closer to our ultimate goal of understanding the exact behavior of the model under a given set of circumstances. However, there is a point of diminishing returns, beyond which a physical prototype must be built. This evolution of simulating our designs has saved a lot of money for manufacturers who, in the past, would have had to build numerous iterative physical prototypes. This evolution of FEA reminds me of…

The uncanny valley is the idea that the more a human representation (robot, wax figure, animation, 3D model, etc.) increases in human likeness, the more affinity people have toward it. That is, however, only up to a certain point. Once this threshold is crossed, our affinity drops off to the point of revulsion, as in the case of zombies or “intermediate human-likeness” prosthetic hands. As the realism continues to increase beyond that, affinity starts to rise again.

Personally, I find this fascinating – that a trend moving through time can abruptly change direction, and then, for some strange reason, revert to its original direction. Why does this happen? There are myriad speculations as to why on the Wikipedia page, which I encourage the reader to peruse at leisure.

But to tie this back to FEA, think of the beginning of the uncanny valley curve as the start of computer-aided design simulation. The horizontal axis is time; the vertical axis is accuracy. I posit that over time, as simulation software has improved, the accuracy of our simulations has also increased. As time has gone on, ease of use has also improved, allowing non-doctorate holders to utilize simulation as part of their design process.

And this is where we see the uncanny valley: as good as the software is, there comes a point, if you need specialized, intricate, or non-standard analysis, where the accuracy of the software falters. This tells us that there will still be a need for those PhDs, and once they get involved in the design and start using the software, accuracy climbs sharply again.

If you need help getting to the door, or navigating the valley, talk to us about our Simulation benchmark process. Leave a comment or click to contact us.

 

When considering an upgrade to a network deployment of software, there are a lot of steps involved.  Without a proper plan, significant disruption of engineering systems can occur.  Let’s take a look at a plan for upgrading an Autodesk network deployment of software.

Autodesk licenses (for those with an active contract) allow the use of the current version as well as the three previous versions.  The three-version consideration for FlexLM actually involves the license files themselves, not the version of the license manager.  Here is some clarification:

  • When Autodesk issues a license file to a customer on subscription / maintenance, it will be for the current version (2017) and the three previous versions (2014-2016).  So when you request a NEW license file, you will be able to run any combination of 2014 to 2017 software with that NEW license file.
  • Old versions of the Autodesk Network License Manager often can’t read new license files.
  • New versions of the Autodesk Network License manager (FlexLM) can still read old license files.  This means that you can still use an existing license file (for your 2013-2014 software) while you are upgrading to newer software editions.  This is permitted for up to 30 days during a software transition.

Here are a set of steps that can be used to upgrade an Autodesk networked software environment (example for 2013 to 2017):

  1. Upgrade your license manager to one compatible with 2017 software while continuing to use your existing license file.
  2. Create software deployments for the 2017 versions and prepare to roll them out on workstations.
  3. Obtain and test (status enquiry) a new 2017 license file for use in the upgraded license manager (use LMTOOLS to configure and verify).  For the time being, this license file will be a merged version of the previous license file and the new one, created by simply copying the contents of the newly obtained license file into the existing one (see the sketch after this list).  This allows users to continue using their existing 2013 software while the newer 2017 software is deployed and tested.
  4. Roll out and test 2017 deployments on users’ workstations.  This can be done while leaving the existing 2013 software on their workstations for production use during the transition.
  5. After testing of the 2017 software is complete and it has been rolled out to all users’ workstations, the old license file content (for 2013) will need to be removed from the merged license file.  Once the old content is removed (keep a copy for reference), do a Stop, Start, Re-Read in LMTOOLS for the changes to take effect.  This step is critical to comply with the license agreement, and failing to disable the old software is a common oversight that gets companies in trouble in the case of a software audit.  I would do this within 30 days of obtaining the 2017 license file to be safe.
  6. After you are sure there are no serious problems with 2017 on users’ workstations, the 2013 edition can be uninstalled.
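
For what it’s worth, the merge in step 3 is nothing more than a copy-and-append. A rough sketch is below; the paths are hypothetical, and you should keep a backup and follow Autodesk’s own documentation for your environment:

```python
# Rough sketch of step 3 above: append the newly issued license content to the
# license file the Network License Manager already serves. Paths are made up --
# adjust them to your license server layout, and keep a backup.
from pathlib import Path
import shutil

existing = Path(r"C:\Autodesk\Licenses\autodesk.lic")       # file LMTOOLS points at
new_file = Path(r"C:\Autodesk\Licenses\autodesk_2017.lic")  # newly issued 2017 file

shutil.copy2(existing, str(existing) + ".bak")              # safety copy

with existing.open("a") as merged, new_file.open() as incoming:
    merged.write("\n" + incoming.read())

# Then do a Stop, Start, Re-Read in LMTOOLS and verify with a status enquiry.
```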

Hopefully this adds some clarity to an often confusing process.

In the years to come, fuel efficiency and reduced emissions will be key factors in determining success within the transportation & mobility industry. Fuel economy is often directly associated with the overall weight of the vehicle. Composite materials have been widely used in the aerospace industry for many years to achieve the objectives of light weight and better performance at the same time.

The transportation & mobility industry has been following the same trend, and it is not uncommon to see composites applied in this industry sector nowadays; however, unlike in aerospace, wide replacement of metals with composites is not feasible in the automotive industry. Hence, apart from material replacement, other novel methods of designing and manufacturing lightweight structures without compromising performance will find greater utilization in this segment. In this blog post, I will discuss the application of Tosca, a finite element based optimization technology.

Lightweight design optimization using a virtual product development approach is a two-step process: concept design followed by design improvement.

Design concept: Product development costs are largely determined in the early concept phase. Automatic generation of optimized design proposals reduces the number of product development cycles and physical prototypes; quality is increased and development costs are significantly reduced. All you need is a definition of the maximum allowed design space – Tosca helps you find the lightest design that fits within it and satisfies all system requirements. The technology associated with the concept design phase is called topology optimization; it considers all design variables and functional constraints in the optimization cycle while pursuing the minimum-weight objective. The technique is iterative and typically converges to a near-optimal design.

HOW IT WORKS

The user starts with an initial design by defining the design space, the design responses, and the objective function. The design space is the region from which material removal is allowed in incremental steps, and the objective function is often the overall weight of the component, which is to be minimized. With each incremental removal of material, the performance of the component changes, so each Tosca increment is followed by a finite element analysis that checks the current performance against the target performance. If the target performance criteria are satisfied, the updated design increment is accepted and Tosca proceeds to the next increment. This process of incremental material removal continues until the objective function is satisfied or no further design improvement is feasible. The image below depicts the complete CAD-to-CAD process flow in Tosca. The intermediate steps include Tosca pre-processing, co-simulation between Tosca and a finite element code, and Tosca post-processing.
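
A toy, tool-agnostic sketch of this loop is below; it is not Tosca’s actual algorithm, just the remove-and-check idea in miniature, with made-up per-element stiffness numbers standing in for the finite element results:

```python
# Toy illustration of incremental material removal with a performance check
# after each increment (not Tosca's real algorithm or API). Each "element"
# carries a made-up stiffness contribution; we keep removing the least useful
# element while the remaining structure still meets the stiffness target.

def toy_topology_optimization(element_stiffness, stiffness_target):
    """Return the indices of the elements kept after greedy material removal."""
    kept = set(range(len(element_stiffness)))
    while True:
        candidate = min(kept, key=lambda i: element_stiffness[i])      # optimizer step
        remaining = sum(element_stiffness[i] for i in kept if i != candidate)  # "FE check"
        if remaining < stiffness_target:
            break                      # removal would violate the functional constraint
        kept.remove(candidate)         # accept the increment and continue
    return sorted(kept)

stiffness = [4.0, 1.0, 3.0, 0.5, 2.0]                       # per-element contributions
print(toy_topology_optimization(stiffness, stiffness_target=7.0))  # -> [0, 2]
```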

Tosca workflow

During the material removal process, Tosca can be asked to perform the optimization in a way that provides a feasible solution not only from a design perspective but from a manufacturing perspective as well. For example, Tosca can be restricted to recommending only those design variations that can be manufactured using casting or stamping processes. This is done by defining one or more of the manufacturing constraints available in the Tosca constraints library.

manufacturing constraints

While topology optimization is applicable only to solid structures, that does not mean Tosca cannot optimize sheet metal parts. The sizing optimization module of Tosca allows users to define the thicknesses of sheet metal parts as design variables with lower and upper bounds. […]

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding the common practices and techniques. So, this week’s blog post will address a common type of 3D printing known as Selective Laser Sintering (SLS).

But first, What is Additive Manufacturing?

Additive manufacturing is the process of creating a part by laying down a series of successive cross-sections (2D “sliced” sections of a part). It entered the manufacturing world about 35 years ago, in the early 1980s, and was adopted more widely later in the decade. Another common term for additive manufacturing is 3D printing, a term that originally referred to a specific process but is now used to describe all similar technologies.

Now that we’ve covered the basics of 3D Printing, What is Selective Laser Sintering?

It is actually part of a broader category, commonly referred to as granular-based techniques. All granular-based additive manufacturing techniques start with a bed of powdered material. A laser beam or bonding agent joins the material in a cross-section of the part. Then the platform beneath the bed of material is lowered, and a fresh layer of material is brushed over the top of the cross-section. The process is repeated until a complete part is produced. The first commercialized technique in this category is known as Selective Laser Sintering.
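
Purely as a schematic (real machines are driven by sliced build data, and the values here are made up), the repeat cycle can be written as a loop:

```python
# Schematic of the SLS build cycle described above: sinter a cross-section,
# lower the platform, recoat with fresh powder, repeat. Purely illustrative;
# the part height and layer thickness are made-up values.
import math

def build_cycles(part_height_mm: float, layer_thickness_mm: float) -> int:
    """Count the sinter / lower / recoat cycles needed to build one part."""
    cycles = 0
    platform_z_mm = 0.0
    while cycles * layer_thickness_mm < part_height_mm:
        # 1. Laser traces and fuses this cross-section into the powder bed.
        platform_z_mm -= layer_thickness_mm   # 2. Platform drops one layer.
        # 3. Recoater brushes a fresh layer of powder over the bed.
        cycles += 1
    return cycles

print(build_cycles(part_height_mm=30.0, layer_thickness_mm=0.125))  # -> 240
```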

The Selective Laser Sintering technique was developed in the mid-1980s by Dr. Carl Deckard and Dr. Joseph Beaman at the University of Texas at Austin, under DARPA sponsorship. As a result, Deckard and Beaman established the DTM Corporation with the explicit purpose of manufacturing SLS machines. In 2001, DTM was purchased by its largest competitor, 3D Systems.

What Are the Advantages of this Process?

SLS is quick. It’s one of the fastest rapid prototyping techniques. Though, relatively speaking, most techniques are fast. SLS also has the widest array of usable materials. Theoretically, just about any powdered material can be used to produce parts. In addition, it can potentially be one of the most accurate rapid prototyping processes – the major limiting factor being the particle size of the powdered material.

Because parts are created in a bed of material, there is no need for support structures as in other forms of rapid prototyping. This helps avoid secondary operations and machining. Another advantage of the material bed is the ability to stack multiple parts in the build envelope, which can greatly increase the throughput of an SLS machine.

What Are the Disadvantages of this Process?

Of the commercially available rapid prototyping machines, those that use the Selective Laser Sintering technique tend to have the largest price tag. This is usually due to the scale of production these machines are designed for, which makes them much larger than others.

SLS can be very messy. The material used is a bed of powdered material and, if not properly contained, will get EVERYWHERE. In addition, breathing in powdered metals and polymers can potentially be very hazardous to one’s health; though most machines account for this, it is certainly something to be cognizant of when manufacturing.

Unlike other manufacturing processes, SLS limits each part to a single material. This means parts printed on SLS machines will be limited to those with uniform material properties throughout.

As materials aren’t fully melted, full density parts are not created through this process. Thus, parts will be weaker than those created with traditional manufacturing processes, although full density parts can be created through similar manufacturing processes, such as SLM.

In Conclusion

There are quite a few different ways to 3D print a part, with unique advantages and disadvantages of each process. This post is part of a series, discussing the different techniques. Thanks for reading!

It always amazes me, the sheer complexity of the task.  We must take a detailed engineering design, start with a simple block of metal, and through the application of pressure and process, whittle that block down to a functional product, accurate to within microns.


In order to accomplish this feat more efficiently and bring the cost per part down, CNC machine tools have added more of everything in recent years. They have become more powerful, allowing for higher cutting speeds that require advanced feed-rate control to use effectively.  They have also become more dynamic, with 5-axis mills and multi-spindle, multi-turret mill-turn machines offering opportunities to minimize part setups, increase accuracy, and reduce overall machining time.

They have, in short, become more complex.  And with that complexity comes additional expense.  With machines that routinely cost hundreds of thousands, if not millions, of dollars, the reality of the situation is that a machine collision is simply not an option.

There are so many capabilities and options available on a modern NC machine tool that ensuring the machine is properly programmed to do what is expected becomes a monumental task.  You need a powerful programming tool to help you create the paths, controlling the cutting tool axis, speeds, engagements, and retracts so as to efficiently and accurately machine the product.

Those paths, when initially reviewed in the CAM software, may look feasible from the context of the tool, but upon generating the code and loading it into the controller, there are often motions that are either positional in nature (rotating the part to align the tool) or controller specific (e.g. go-home moves) that create collisions with objects such as fixtures or the part, or that require movement beyond the machine’s axis limitations. […]
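
As a toy example of the kind of check an integrated simulation tool performs long before the part ever reaches the machine (the limits, moves, and flat move format below are all made up for illustration):

```python
# Toy pre-check of programmed positions against machine axis travel limits.
# Real machine simulation works on the posted code and the full kinematics;
# the limits and moves below are made-up values for illustration only.

AXIS_LIMITS = {"X": (-400.0, 400.0), "Y": (-300.0, 300.0),
               "Z": (0.0, 500.0), "B": (-120.0, 120.0), "C": (-360.0, 360.0)}

def check_over_travel(moves):
    """Return (move index, axis) pairs that exceed the machine's travel."""
    violations = []
    for i, move in enumerate(moves):
        for axis, value in move.items():
            low, high = AXIS_LIMITS[axis]
            if not low <= value <= high:
                violations.append((i, axis))
    return violations

program = [{"X": 120.0, "Y": 80.0, "Z": 250.0},
           {"B": 95.0, "C": 180.0},
           {"B": 130.0, "C": 10.0}]           # B exceeds the +/-120 degree limit

print(check_over_travel(program))             # -> [(2, 'B')]
```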

In this blog post, we will look into the basics of surface development and gain an understanding of what continuity is. Years ago, when I used to teach full time, I would tell my students that I called it “continue-ity,” the reason being that you are essentially describing how one surface continues, or flows, into another surface. Technically, you could describe curves and how they flow with one another as well. So let’s get started.

G0 or Point Continuity is simply when one surface or curve touches another and they share the same boundary.  In the examples below, you can see what this could look like on both curves and surfaces.

G0 Continuity

 

G0 Curve Continuity

As we progress up the numbers on continuity, keep in mind that the previous number(s) must hold in order for it to be true. In other words, you can’t have G1 continuity unless you at least have G0 continuity; in a sense, it’s a prerequisite. G1, or Tangent (Angular) continuity, implies that two faces/surfaces meet along a common edge and that the tangent plane, at each point along the edge, is equal for both faces/surfaces. They share a common angle; the best examples of this are a fillet, a blend with tangent continuity, or in some cases a conic. In the examples below, you can see what this could look like on both curves and surfaces. […]
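
For readers who like to see the definitions in action, here is a small numerical check of G0 versus G1 at a curve junction (a hypothetical example, not tied to any CAD system’s API):

```python
# Numerical illustration of G0 vs. G1 continuity where two curves meet.
# Hypothetical example, not tied to any CAD API: G0 means the endpoints
# coincide; G1 additionally means the tangent directions line up.
import numpy as np

def continuity_at_junction(point_a, point_b, tangent_a, tangent_b, tol=1e-6):
    """Classify the junction as 'none', 'G0', or 'G1'."""
    if np.linalg.norm(np.subtract(point_a, point_b)) > tol:
        return "none"                  # the curves do not even share the boundary
    ta = np.asarray(tangent_a, dtype=float)
    tb = np.asarray(tangent_b, dtype=float)
    ta /= np.linalg.norm(ta)
    tb /= np.linalg.norm(tb)
    if np.linalg.norm(np.cross(ta, tb)) > tol:
        return "G0"                    # positions match but the tangents kink
    return "G1"                        # positions and tangent directions match

# Two curves meeting at the origin with a 30-degree kink: G0 but not G1.
print(continuity_at_junction((0, 0, 0), (0, 0, 0),
                             (1, 0, 0), (np.cos(np.pi / 6), np.sin(np.pi / 6), 0)))
```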
