Tips & Tricks

Upgrading a network deployment of software involves a lot of steps. Without a proper plan, significant disruption of engineering systems can occur. Let’s take a look at a plan for upgrading an Autodesk network deployment of software.

Autodesk licenses (for those with an active contract) allow the use of the current version as well as the three previous versions. This three-version rule applies to the license files themselves, not to the version of the license manager (FlexLM). Here is some clarification:

  • When Autodesk issues a license file to a customer on subscription / maintenance, it will be for the current version (2017) and the three previous versions (2014-2016).  So when you request a NEW license file, you will be able to run any combination of 2014 to 2017 software with that NEW license file.
  • Old versions of the Autodesk Network License Manager often can’t read new license files.
  • New versions of the Autodesk Network License Manager (FlexLM) can still read old license files.  This means you can continue to use an existing license file (for your 2013-2014 software) while you are upgrading to newer software editions.  This is permitted for up to 30 days during a software transition.

Here are a set of steps that can be used to upgrade an Autodesk networked software environment (example for 2013 to 2017):

  1. Upgrade your license manager to one compatible with 2017 software while continuing to use your existing license file.
  2. Create software deployments for the 2017 versions and prepare to roll them out on workstations.
  3. Obtain a new 2017 license file and test it (status enquiry) in the upgraded license manager, using LMTOOLS to configure and verify.  For the time being, this license file will be a merged version of the previous license file and the new one, created by simply copying the contents of the newly obtained license file into the existing one (see the illustrative layout after this list).  This allows users to continue running their existing 2013 software while the newer 2017 releases are deployed and tested.
  4. Roll out and test the 2017 deployments on users’ workstations.  This can be done while leaving the existing 2013 software on their workstations for production use during the transition.
  5. After the 2017 software has been tested and rolled out to all users’ workstations, the old license file content (for 2013) needs to be removed from the merged license file.  Once the old content is removed (keep a copy for reference), do a Stop, Start, Re-Read in LMTOOLS for the changes to take effect.  This step is critical to comply with the license agreement, and failing to disable the old software is a common oversight that gets companies in trouble in a software audit.  I would do this within 30 days of obtaining a 2017 license file to be safe.
  6. After you are sure there are no serious problems with 2017 on users’ workstations, the 2013 edition can be uninstalled.
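
For illustration only, here is the general shape of the merged license file described in step 3. The server name, host ID, ports, and feature lines below are placeholders rather than real Autodesk feature codes; the actual content always comes from the license file Autodesk issues to you. The point is simply that the new INCREMENT lines are pasted below the existing ones, under a single set of SERVER and VENDOR lines, and that the old lines are what you delete in step 5:

SERVER licserver01 001122334455 27000
VENDOR adskflex port=2080
# --- existing content supporting the 2013 products (remove at step 5) ---
INCREMENT <2013_feature_code> adskflex 1.000 permanent 25 \
    VENDOR_STRING=commercial:permanent ... SIGN="<signature>"
# --- new 2017 content pasted in at step 3 ---
INCREMENT <2017_feature_code> adskflex 1.000 permanent 25 \
    VENDOR_STRING=commercial:permanent ... SIGN="<signature>"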

Hopefully this adds some clarity to an often confusing process.

In the years to come, fuel efficiency and reduced emissions will be key factors in determining success within the transportation & mobility industry. Fuel economy is often directly associated with the overall weight of the vehicle. Composite materials have been widely used in the aerospace industry for many years to achieve the objectives of light weight and better performance at the same time.

The transportation & mobility industry has been following the same trends, and it is not uncommon to see composites applied in this sector nowadays; however, unlike the aerospace industry, the automotive industry cannot feasibly replace metals with composites on a wide scale. Hence, apart from material replacement, other novel methods of designing and manufacturing lightweight structures without compromising performance will find greater utilization in this segment. In this blog post, I will discuss the application of TOSCA, a finite element based optimization technology.

Lightweight design optimization using a virtual product development approach is a two-step process: concept design followed by design improvement.

Design concept: Product development costs are largely determined in the early concept phase. The automatic generation of optimized design proposals reduces the number of product development cycles and physical prototypes; quality is increased and development costs are significantly reduced. All you need is the definition of the maximum allowed design space – Tosca helps you find the lightest design that fits that space and satisfies all system requirements. The technology associated with the concept design phase is called topology optimization; it considers all design variables and functional constraints in the optimization cycle while pursuing the minimum-weight objective function. The technique is iterative and typically converges to an optimal design.

HOW IT WORKS

The user starts with an initial design by defining the design space, design responses, and objective function. The design space is the region from which material removal is allowed in incremental steps, and the objective function is often the overall weight of the component being optimized. With each incremental removal of material, the performance of the component changes, so each Tosca increment is followed by a finite element analysis that checks the current performance against the target performance. If the target performance criteria are satisfied, the updated design increment is accepted and TOSCA proceeds to the next increment. This process of incremental material removal continues until the objective function is satisfied or no further design improvement is feasible. The image below depicts the complete CAD-to-CAD process flow in Tosca. The intermediate steps include TOSCA pre-processing, co-simulation between TOSCA and a finite element code, and TOSCA post-processing.
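
Purely as an illustration of the loop just described – not Tosca’s actual algorithm, code, or API, and with every routine and number below invented for the sketch – the incremental remove-and-check cycle looks like this:

Module TopologyLoopSketch
    'Placeholder state: a real run tracks a full finite element model, not two numbers.
    Private currentMass As Double = 100.0
    Private currentStiffness As Double = 1.0

    Sub Main()
        'Remove material in small increments and re-check performance with an FE run each time.
        While True
            Dim trialMass As Double = currentMass * 0.97          'remove a small slice of material (illustrative)
            Dim trialStiffness As Double = RunFeaStub(trialMass)  'stand-in for the FE co-simulation step

            If trialStiffness >= 0.8 Then                         'illustrative target performance criterion
                currentMass = trialMass                           'increment accepted; continue removing material
                currentStiffness = trialStiffness
            Else
                Exit While                                        'no further removal is feasible; stop
            End If
        End While

        Console.WriteLine("Final mass: " & currentMass & ", stiffness measure: " & currentStiffness)
    End Sub

    'Placeholder "analysis": in reality this is a full finite element solve.
    Private Function RunFeaStub(mass As Double) As Double
        Return mass / 100.0   'pretend performance degrades in proportion to removed material
    End Function
End Module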

Tosca workflow

During the material removal process, TOSCA may be asked to perform an optimization that yields a feasible solution not only from a design perspective but from a manufacturing perspective as well. For example, TOSCA may be asked to recommend only those design variations that can be manufactured using casting and stamping processes. This is possible by defining one or more of the manufacturing constraints available in the TOSCA constraints library.

manufacturing constraints

While topology optimization is applicable only to solid structures, that does not mean TOSCA cannot optimize sheet metal parts. The sizing optimization module of TOSCA allows users to define the thicknesses of sheet metal parts as design variables with a lower bound and an upper bound. […]

In this blog post, we will look into the basics of surface development and gain an understanding of what continuity is. Years ago when I used to teach full time I would tell my students that I called it “continue-ity,” the reason being that you are essentially describing how one surface continues or flows into another surface. Technically, you could describe curves and how they flow with one another as well. So let’s get started.

G0 or Point Continuity is simply when one surface or curve touches another and they share the same boundary.  In the examples below, you can see what this could look like on both curves and surfaces.

G0 Continuity

G0 Curve Continuity

As we progress up the numbers on continuity, keep in mind that the previous level(s) must exist in order for the next to be true. In other words, you can’t have G1 continuity unless you at least have G0 continuity; in a sense, it’s a prerequisite.  G1, Tangent, or Angular continuity implies that two faces/surfaces meet along a common edge and that the tangent plane, at each point along the edge, is equal for both faces/surfaces. They share a common angle; the best examples of this are a fillet, a blend with tangent continuity, or in some cases a conic.  In the examples below, you can see what this could look like on both curves and surfaces. […]
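
To put the first two levels in slightly more formal terms (a standard textbook formulation, not specific to any one CAD package): if two curves C1(t) and C2(t) are each parameterized from 0 to 1 and meet end to start, then

    G0 (point):    C1(1) = C2(0)
    G1 (tangent):  C1(1) = C2(0)  and  C1'(1) = λ · C2'(0)  for some λ > 0

In words, G0 means the curves touch, and G1 means their tangent directions also line up at the joint; the magnitudes of the tangent vectors may differ, which is what distinguishes geometric continuity (G1) from parametric continuity (C1). For surfaces, the same idea applies along the shared edge: G1 means the tangent planes of the two surfaces coincide at every point of that edge, exactly as described above.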

Space: the final frontier!

…at least that is how I am beginning to feel as design software and its features evolve. In this post, I want to talk about the basics – specifically the basics of component design.

The age-old question arises at times: do I begin the design at 0,0,0, or do I design the component in its assembly position? Does it matter? Well, yes and no. With most CAD software packages, you have the ability to constrain or mate a feature to the component it mates with. So technically, almost every component can be designed at 0,0,0 and then simply assembled when you are done, as long as you have a mating condition to work with. This method is typically referred to as Bottom Up design. You see it most often in the design of off-the-shelf items that you basically plug and play as needed, e.g. fasteners, tubing, brackets, etc.

Fasteners

The alternative to this type of design is when you have a group of components that don’t necessarily mate together but need to come into the correct assembly position every time they are inserted. This method is typically referred to as Top Down design.  In the Automotive realm of design, all of the body panels are designed using a top down method.  Generally you will hear the term “designed in body position,” which indicates it is a top down design.

The key to working on a top down design is that every component is designed using a common axis system, aka a common 0,0,0 location. Major systems in a vehicle that are reused across other vehicles are developed using their own common axis system, which is not the vehicle axis system.  For example, an engine might have an axis system built at the rear face of the block and the centerline of the crank. […]

How many times has the first design iteration submitted to FEA modeling passed the design criteria?

The answer is close to zero, but even if it does happen by a stroke of fortune, the design is not the optimal design – which means that although the design requirements are met and validated by FEA, there is always scope for improvement, either in terms of cost or in terms of performance. In general, it is not unusual to take 15 to 20 iterations to reach the optimal design.

Any analyst knows the pain of creating a detailed finite element simulation model. Most of the steps involved, such as geometry cleaning and meshing, are very time-consuming, and they are primarily driven by geometry. Let’s look at the workflow in more detail:

An analyst in the automotive industry often performs finite element modeling in Hypermesh, stress analysis in Abaqus, optimization in Optistruct, and durability in Fe-Safe or N-code. An analyst in the aerospace industry often performs CAD composites work in CATIA, finite element modeling in Abaqus CAE, stress analysis in Abaqus or Nastran, and durability in Fe-Safe. Analysts working in other industries have their own suites of FEA tools. The entire process requires data to flow from one simulation code to the next: the output from one code serves as input to the other, and quite often this transfer is done manually by the analyst.

This means that in situations where the optimal design is obtained in 20 iterations, as mentioned above, an analyst has to perform geometry cleaning 20 times, create FE meshes manually 20 times, and transfer the simulation data from one piece of code to another 20 times. By the time these design iterations are over, the analyst’s face and computer look somewhat like this:

Let analysts remain analysts and let a simulation robot do the rest!

The traditional job of a finite element analyst is to build robust, high-fidelity simulation models that give correct results under real-life loads.  The analyst is not an FE robot who can perform repetitive tasks with ease.  In situations like the one mentioned above, it makes perfect sense to let the FE analyst create a robust FE model only once per FE code involved, and then introduce a simulation robot that can capture the hidden steps and workflow, create a script, and execute that script multiple times.  This simulation robot is called ISight. […]
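
To make the repetitive part concrete, below is the kind of hand-rolled driver loop an analyst would otherwise have to script and babysit. It is shown only to illustrate the problem, not how ISight itself works (ISight builds and runs this sort of workflow graphically), and the solver command line, file names, target value, and result-parsing helper are all hypothetical:

'Illustration only: the manual design-iteration loop that a workflow automation tool takes over.
Imports System.Diagnostics
Imports System.IO

Module ManualIterationLoop
    Sub Main()
        For iteration As Integer = 1 To 20
            '1. Write the updated design variable(s) for this iteration (made-up file format)
            File.WriteAllText("design_vars.txt", "thickness=" & (2.0 + 0.1 * iteration))

            '2. Run the FE solver in batch mode and wait for it to finish (command line is illustrative)
            Dim solver As Process = Process.Start("abaqus", "job=bracket interactive")
            solver.WaitForExit()

            '3. Pull the response of interest out of the solver output (hypothetical parser)
            Dim maxStress As Double = ReadMaxStress("bracket.dat")
            Console.WriteLine("Iteration " & iteration & ": max stress = " & maxStress)

            '4. Stop when the design target is met
            If maxStress <= 250.0 Then Exit For
        Next
    End Sub

    'Hypothetical helper: how results are extracted depends entirely on the solver and its output format.
    Private Function ReadMaxStress(path As String) As Double
        Return 999.0   'placeholder value
    End Function
End Module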

Inventor 2017 R2 has introduced some useful new ballooning functionality, in addition to some techniques you may not have previously been aware of.  Balloon sorting was introduced in this release and works very well in cases where multiple balloons have been attached into one grouping.  Let’s take a look at the steps to accomplish this:

balloons1

1. Typically, balloons might look like this to start.

 

balloons2

2. Right-click the balloon you want the others attached to and select one of the “attach” options.

 

balloons3

3. Pick the other items you want attached.

 

balloons4

4. Right-click the balloon group and select “Sort Balloons”.

 

balloons5

5. The result should look something like this after deleting the previous balloons.

Managing and tracking Teamcenter administrative data across multiple environments was never easy. Companies have relied on a wide variety of solutions for this – from manual, process-based solutions like cheat sheets to custom-built scripts that wrap numerous Siemens administrative utilities together. Some companies had a great deal of success in establishing a corporate-standard Teamcenter environment from scratch using custom solutions; even for them, however, tracking the changes to admin data inside their Teamcenter environments or comparing admin data between different environments was a very tedious process. Not anymore, with the new set of admin data management capabilities Siemens introduced with Teamcenter 11.2.

Now, we can easily perform the following four broad scenarios on nine types of administrative data (Access Manager rules, Organization data, Preferences, Projects, Revision rules, Saved queries, Style sheets, Subscriptions and Workflows), both through the UI-based Teamcenter Environment Manager (TEM) and through command-line utilities.

1. Analyzing how administration data is configured in any environment

A detailed administrative data report can be generated for any Teamcenter environment with all nine admin categories or with a few selected categories, or even a partial set from a specific category based on the filter set. These reports are static HTML reports and don’t have a live connection to the Teamcenter environment. These reports can be used to:

  • Document and review admin data configurations of any environment
  • Troubleshoot issues with any configuration
  • Include with IR for GTAC analysis of problems
  • Use for periodic reviews of production environments
  • Use as environment hand-off document
  • Use as a training document
  • Use to capture snapshots of admin data configurations at specific points in time

2. Copying entire administration data or a subset from one environment to another

We can export administration data from one site and import it into another. This is very useful when we must ensure that one environment is configured the same as another, such as a test or training environment. We can also set up teams to work on specific parts of the administrative data in different test environments and then export only the administration data that changed in each environment. We can then consolidate the changes made by different teams by importing all of the administration data from multiple export packages into one environment. A dry-run mode is also available during import, and a detailed Javadoc-style report describing what changed is generated after the import. The tool also provides five broad categories for conflict resolution and merging during the import, but there is no graphical interface yet for manually overriding specific admin data.

3. Comparing administration data between two sites

We can generate a report that compares the administration data of a source site to that of a target site. This can help us determine the cause of differences in site behavior, determine the differences between a customer environment and the out-of-the-box Teamcenter environment, quickly check whether a new environment established using custom scripts is configured the same as a reference environment, or see what is common and what is different between sites during a site consolidation effort.

4. Tracking the impacts to an environment as administration data is imported over time

We can quickly determine when a particular change was introduced using a site’s administration data import history report. This report is automatically generated and maintained upon each successful import to a site.

These new capabilities are part of Siemens’ efforts to reduce the Teamcenter cost of ownership and help companies reduce IT costs through:

  • Faster environment setup through automation instead of manual steps
  • Quicker learning curves through standardized and automated documentation
  • Easier bundling of admin data with software bundles
  • Faster troubleshooting

Siemens hasn’t deprecated any existing admin data utilities with the introduction of these new tools, so all custom solutions using the existing utilities should continue to work as is. The new tools use TCXML and closure rules behind the scenes, so they bring all related business objects used with the admin data as a complete package, as defined in the closure rules.

Do you have any questions about the new Teamcenter capabilities? Leave a comment and we’ll help you.

ilogic-snip

Sometimes CAD can be used to start establishing PLM practices. Since PLM systems rely on data to be effective, ensuring consistent and correctly entered information is paramount. Capabilities like classification with properties and metadata rely very heavily on CAD to be used effectively. For example, let’s consider the classification and data for a machined part. If the part is going to require machining, we could assign it a classification of “Machined.” Since the part is going to be machined, we would want to ensure that “Stock Size” is one piece of metadata to be tracked. Most CAD systems have a way to ensure this “Stock Size” is at least filled out, and some can even be automated to calculate the stock size without any user intervention. Of course, repeatable logic would need to be put in place, but once that is done, the time spent on stock size calculations and the potential for errors are eliminated.

 

Case in point: Utilize iLogic in Autodesk Inventor to calculate stock size for machined parts. Once this is done, users can forget about manually checking all the measurements; all they need to do is flag the part as “Machined” and the system does the rest!
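
Here is a minimal iLogic sketch of that idea. The “Classification” and “Stock Size” property names, the 5 mm stock allowance, and the rounding are assumptions for illustration, and the rule assumes the part is modeled aligned with the stock axes so that its bounding box is a sensible stock envelope:

'Illustrative iLogic rule: derive a "Stock Size" custom iProperty from the part's bounding box.
'Assumes a custom iProperty named "Classification" already exists on the part.
If iProperties.Value("Custom", "Classification") = "Machined" Then
    Dim oDoc As PartDocument = ThisDoc.Document
    Dim oBox As Inventor.Box = oDoc.ComponentDefinition.RangeBox

    'Inventor's internal units are centimeters; convert to millimeters and add a 5 mm allowance to each dimension.
    Dim dX As Double = (oBox.MaxPoint.X - oBox.MinPoint.X) * 10 + 5
    Dim dY As Double = (oBox.MaxPoint.Y - oBox.MinPoint.Y) * 10 + 5
    Dim dZ As Double = (oBox.MaxPoint.Z - oBox.MinPoint.Z) * 10 + 5

    iProperties.Value("Custom", "Stock Size") = _
        Math.Round(dX, 1) & " x " & Math.Round(dY, 1) & " x " & Math.Round(dZ, 1) & " mm"
End If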

boltvolume

ilogic-iterate

A while back, I was visiting a customer with an interesting design challenge. They happen to be a specialty fastener manufacturer, and a big part of their design work is developing the part geometry (and the associated tooling dies) as it goes through the forging operations that produce the final part. Every change to the component from one forming operation to the next must maintain the same part volume: if the bolt’s head is shortened, then its diameter must also increase to maintain the same volume. When making a batch of design changes, you can imagine how many attempts at changing parameters it takes to get the volume correct.

Since this customer is using Autodesk Inventor, there is an automation environment called iLogic that can be used to solve this challenge. With a bit of minor customization in iLogic, an iterative process can be developed to automatically adjust one parameter when another changes.

The following code could be adapted in iLogic to satisfy many similar situations:

'HeadDepth, HeadDia, and HeadDepthChange are user parameters in the part model.
'Defer parameter updates so the model is only rebuilt when the rule pushes values explicitly.
Parameter.UpdateAfterChange = False
Dim CurrentVolume As Double
Dim VolumeDelta As Double
Dim OldVolume As Double
Dim PercentChange As Double
'reset the percent to a high value so the routine runs
PercentChange = 10

If HeadDepthChange <> 0 Then
    OldVolume = CDbl(iProperties.Volume)   'volume before the change, in Inventor's internal units
    HeadDepth = HeadDepth - HeadDepthChange
    'iterate until the volume nearly matches the original
    While Abs(PercentChange) > .00000000001
        RuleParametersOutput()              'push the rule's parameter values to the model
        InventorVb.DocumentUpdate()         'rebuild the part with the new parameter values
        ThisApplication.ActiveView.Update()

        CurrentVolume = CDbl(iProperties.Volume)
        VolumeDelta = OldVolume - CurrentVolume
        PercentChange = VolumeDelta / OldVolume
        'adjust the diameter by half the volume error (volume varies roughly with the square of the diameter)
        HeadDia = HeadDia + HeadDia * PercentChange / 2
    End While

    CurrentVolume = CDbl(iProperties.Volume)
    VolumeDelta = OldVolume - CurrentVolume
    PercentChange = VolumeDelta / OldVolume
'    MessageBox.Show(PercentChange , "Final Percent of Change")
    MessageBox.Show("Original Volume = " & OldVolume & "  New Volume = " & CurrentVolume & "  Volume Difference = " & VolumeDelta, "Volume Change")

    HeadDepthChange = 0
End If

When I think of the countless customers I have consulted with over the years, it amazes me how many don’t use parameters to control the design and capture design intent! What is a parameter, you ask?  A parameter can be thought of in two ways when it comes to CATIA V5. Parameters are built the moment you start a new part – as you can see in the image below, parameters for the Part Number, Nomenclature, Revision, Product Description, and Definition are created automatically. Parameters are also created each time you build any feature.  These types of parameters are known as system parameters.

new_part_parameters

You can and should build your own parameters to define your design intent. It’s every bit as important during the initial stages of a design to define your intent this way as it is to make sure sketches are constrained properly. In fact, it helps you with your sketch constraints (every constraint is a feature that has parameters associated with it). In the simple example of a piece of standard rectangular tubing shown below, there are constraints defining the height, width, wall thickness, and radii. Even though this is very easy to create, as a designer I would want to design it in such a way that I never have to waste time designing a piece of rectangular tubing again. As a design leader, I feel the same and don’t want any of my designers doing this again in any design that involves a piece of rectangular tubing. The use of parameters will get us there!

RECTANGULAR TUBING SKETCH

 

The parameters I am talking about are user-defined parameters: simple to create but very, very powerful in their functionality.  The simplest way to create a user-defined parameter in CATIA V5 is through the fx icon found on the Knowledge toolbar.

knowledge_toolbar

You might be thinking, where have I seen that icon before? Oh yeah, in Excel when I need to create a formula for a cell. That is exactly the point: in Excel, I use this function to compute things for me and make it easy to come up with a desired result.  In CATIA, we will create some parameters and then, when necessary, assign formulas to them to come up with our desired result.  When you click on the icon, you get the Formulas dialog, and when you click on the drop-down list next to the New Parameter of Type button, you can see that you have many, many options.

new_parameters_types
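
Everything shown in that dialog can also be done programmatically. As a minimal sketch, a CATIA VBA macro along the following lines creates two user-defined length parameters for the tubing example and drives one from the other with a formula; the parameter names, values, and the formula itself are invented for illustration:

'Minimal CATIA VBA sketch: create user-defined parameters and a formula (names and values are examples only).
Sub CATMain()
    Dim oPartDoc As PartDocument
    Set oPartDoc = CATIA.ActiveDocument
    Dim oPart As Part
    Set oPart = oPartDoc.Part

    'Two user-defined length parameters (values are interpreted in the document's length unit, typically mm)
    Dim pHeight As Dimension
    Set pHeight = oPart.Parameters.CreateDimension("Tube_Height", "LENGTH", 50)
    Dim pWidth As Dimension
    Set pWidth = oPart.Parameters.CreateDimension("Tube_Width", "LENGTH", 30)

    'Drive the width from the height so a single change updates the whole section
    oPart.Relations.CreateFormula "WidthRule", "width follows height", pWidth, "Tube_Height * 0.6"

    oPart.Update
End Sub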

[…]
