
One of the first things I typically discuss with customers concerning file management is the relationship between files in their engineering data. This is especially the case when working with data from 3D CAD systems like Autodesk Inventor. When you have assemblies, parts, drawings, and presentations all with linked file relationships, it can be extremely challenging to manage this data without a tool that understands and maintains all the file links. Simply renaming a single file can cause all sorts of problems if done in Windows Explorer. Here are some of the areas where file relationships matter.

  1. Part, Assembly, Drawing – As previously mentioned, 3D CAD data can be a challenge to manage. Simply understanding where a file is used (or linked) can be tremendously helpful.

    “Where Used” within Autodesk Vault

  2. Copy Design – There is a “copy design” tool in Autodesk Vault that can make it much easier to reuse existing designs in the creation of variants based on the original.  This also reduces the amount of duplicate data in Vault because so much more is reused rather than recreated.
  3. Renaming – In many workflows, files are initially created with descriptive filenames. These files then need to be renamed once a design is approved and will go into production. With Inventor data, renaming files in Windows Explorer will break the links between parts, assemblies, and drawings. The files then have to be relinked manually, which can become extremely troublesome if a file was used by another assembly without anyone knowing it: when someone opens that other assembly, the file is missing and very difficult to locate. Vault simply fixes all the file references whenever a file is renamed, so this isn’t a problem (a conceptual sketch of this reference fixing follows the list).
  4. Moving – Files that are moved in Windows Explorer can cause the same problems as renaming, but usually because of the way Inventor uses project files. Using Autodesk Vault with a single Vault type project file eliminates many of the challenges in moving files to more relevant or common locations.
  5. Attachments – Attachments in Vault can also be tracked. One example is a design specification document that applies to a whole class of components. The design spec can be attached to the relevant designs. If the design spec document changes, you can simply do a “where used” from it to see which files will be impacted by the specification change.
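To make the idea of tracked relationships concrete, here is a minimal, purely illustrative Python sketch of the kind of bookkeeping a data management tool performs behind the scenes. The class, file names, and methods are invented for this post – this is not the Vault implementation or its API.

    # Hypothetical PDM-style reference tracking (illustrative only, not the Vault API)
    class ReferenceIndex:
        def __init__(self):
            # parent file -> list of child files it references
            self.references = {}

        def add(self, parent, child):
            self.references.setdefault(parent, []).append(child)

        def where_used(self, child):
            """Reverse lookup: every parent that links to the given file."""
            return [p for p, kids in self.references.items() if child in kids]

        def rename(self, old_name, new_name):
            """Fix every link when a file is renamed so nothing is orphaned."""
            if old_name in self.references:
                self.references[new_name] = self.references.pop(old_name)
            for kids in self.references.values():
                kids[:] = [new_name if k == old_name else k for k in kids]

    index = ReferenceIndex()
    index.add("frame.iam", "bracket.ipt")
    index.add("cart.iam", "bracket.ipt")
    print(index.where_used("bracket.ipt"))       # ['frame.iam', 'cart.iam']
    index.rename("bracket.ipt", "100-0042.ipt")  # both assemblies keep working links

A real system does this across thousands of files and relationship types, but the principle is the same: because every link is known, a where-used query is a simple lookup and a rename never orphans a reference.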

“To specialize or not to specialize, that is the question.”

The question of specializing vs. generalizing arises in many domains: biology, health, higher education, and of course, software. When one has to decide between the two ends of the spectrum, the benefits and risks must be weighed.

As environments have changed over time, animals have had to make a choice: change or perish. Certain species adapted their biology to survive on plants – herbivores; others, on meat – carnivores. When in their preferred environments with ample resources, each can thrive. However, if conditions in those environments change so that those resources are no longer as bountiful, they may die out. Then comes the omnivore, whose adaptations enable it to survive on either type of resource. This wider capability for survival comes at a cost in efficiency. The further you move up the food chain, the less efficient the transfer of energy becomes. Plants produce energy, only 10% of which an herbivore derives, and the carnivore that feeds on the herbivore only gets 10% of that 10% – i.e., 1% of the original energy.

Three hundred trout are needed to support one man for a year.
The trout, in turn, must consume 90,000 frogs, that must consume 27 million grasshoppers that live off of 1,000 tons of grass.
— G. Tyler Miller, Jr., American Chemist (1971)

When it comes to deciding on a course of action for a given health problem, people have the option to go to their family doctor, a.k.a. general practitioner, or to a specialist. There are “…reams of papers reporting that specialists have the edge when it comes to current knowledge in their area of expertise” (Turner and Laine, “Differences Between Generalists and Specialists”), whereas the generalist, even if knowledgeable in the field, may lag behind the specialist and prescribe out-of-date – but still generally beneficial – treatments. This raises the question: what value do we place on the level of expertise? If you have a life-threatening condition, then a specialist makes sense; however, you wouldn’t see a cardiologist if your heart races after a walk up a flight of stairs – your family doctor could diagnose that you need some more exercise.

When it comes to higher education, this choice of specializing or not also exists: to have deep knowledge and experience in a few areas, or a shallower understanding across a broad range of applications. Does the computer science major choose to specialize in artificial intelligence or networking? Or not specialize at all? How about the music major? Specialize in classical or German Polka? When making these decisions, goals should be settled first. What is it that drives the person? A high salary in a booming market (hint: chances are that’s not German Polka)? Or is the goal pursuing a passion, perhaps at the cost of potential income? Or is it the ability to be valuable to many different types of employers in order to change as the markets do? It’s been shown that specialists may not always command a higher price tag; some employers value candidates who demonstrate they can thrive in a variety of pursuits.

Whether you’re looking to take advantage of specialized design products (for instance, sheet metal or wire harnesses) or to gain the value inherent in a general suite of tools on a connected PLM platform that can handle project management, CAPA, and Bill of Materials management, we have the means. A “Digital Engineering” benchmark can help you decide if specialized tools are right for your company. Likewise, our PLM Analytics benchmark can help you choose the right PLM system or sub-system to implement.

Specialize, or generalize? Which way are you headed and why?

In this era of globalization, product companies face market pressure from global competition and price deflation, and they now seek alternative sources of profitable revenue growth enabled by value-added service products. Developing a service-based revenue stream and then delivering product service that is both effective and profitable has its own challenges, however. Even mature service organizations are seeking new approaches to reach a significantly higher quality of service delivery.

Today, in a typical product company, there is no single application to manage the data and the decision points required to deliver effective service. Multiple enterprise applications – PLM, ERP, and often a combination of local databases, spreadsheets, and stand-alone IT systems – are involved in service management. The result is fragmented information and knowledge processes around service delivery.

A new approach, centered on incorporating service lifecycle management (SLM) as an integral part of product lifecycle management (PLM), is required to achieve significant improvement in service readiness and delivery. First, this approach focuses on making complex products easier and less costly to maintain, allowing for more effective allocation of service resources. The second key component is managing the complexity of service information, which reduces the cost and time to create and deliver critical service documentation and service records while improving the quality and efficacy of that information.

With SLM approached as an extended PLM process, design information can be used to bootstrap and enhance service planning, product changes and updates can be directed into revised service work instructions, and field experience provides up-to-date insight into product quality. The bulk of the information required for service – illustrations, schematics, and work instructions – already exists within the engineering organization and can be repurposed with relatively little effort. 3D CAD models and Bills of Materials can be used to create everything from exploded wireframe views to photorealistic renderings and remove-and-replace animations that help in service execution. Manufacturability and ergonomic simulations can be used to improve the safety and efficiency of repair procedures.

The expanded PLM system needs to act as a centralized repository for the service bill of materials (sBoM) along with the engineering and manufacturing BoMs, so that service items – which are mostly design and manufacturing items repurposed for service – can be synchronized to reflect the most up-to-date state of information. This synchronization is possible when SLM is part of PLM and shares the same configuration and change management processes.
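As a purely hypothetical illustration of that synchronization – the item names and data model below are invented, not any vendor’s schema – a service item can simply point back at its engineering source and pick up the released revision through the shared change process:

    # Invented sketch of sBoM/eBoM synchronization under shared change control
    ebom = {"PUMP-100": {"rev": "C", "description": "Coolant pump"}}

    sbom = {"SVC-PUMP-100": {"source_item": "PUMP-100", "rev": "B",
                             "kit": ["seal", "gasket", "o-ring"]}}

    def sync_sbom(sbom, ebom):
        """Propagate the released engineering revision to the service items
        that reuse it; trivial here because both views share one repository."""
        for item in sbom.values():
            source = ebom.get(item["source_item"])
            if source and item["rev"] != source["rev"]:
                item["rev"] = source["rev"]   # service view now matches engineering

    sync_sbom(sbom, ebom)
    print(sbom["SVC-PUMP-100"]["rev"])  # 'C'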

This way, enterprise PLM systems become the digital backbone of the entire product life cycle – including SLM – and SLM becomes a dynamic process connected with PLM that continues throughout the useful life of the product or asset. This reduces process fragmentation and provides rich end-to-end context for better and more profitable service.

The combined PLM and SLM approach, along with new service models based on the latest technologies (such as the Internet of Things), enables brand owners to deliver higher quality service at lower cost, resulting in higher profit margins, enhanced brand image, and greater customer loyalty. Product or asset owners who are the end customers also benefit from increased utilization and productivity due to faster and more reliable service.

What do you think? Is your organization connected?

Today’s topic focuses a little on the licensing side of CATIA – namely CAT3DX and the thinking behind it.

Several years ago, Dassault changed the way they packaged CATIA V5 by introducing PLM Express as a way to buy it; my colleague Jason Johnson explained this in a previous post. As he mentioned, this was referred to as CATIA TEAM PLM and was really designed to connect existing CATIA V5 users to their new PLM offering, which was ENOVIA SmarTeam. He also wrote briefly about the configurations and trigrams that make up the software. The easiest way to think about a trigram is that a group of trigrams makes up a configuration, and trigrams by themselves give you access to particular workbenches – or in some cases only add toolbars to existing workbenches.

Why does this matter? Because there is a new sheriff in town called 3DEXPERIENCE. Much more than a PLM system, the 3DEXPERIENCE platform’s suite of tools helps users manage their daily work, projects, processes, BOMs, documents, CAD, and more. While an old CAC (CAT) license – the base configuration for PLM Express – gave you access to SmarTeam by bundling in the TDM and TEM trigrams, the new CAT3DX gives you all of that plus access to the ENOVIA 3DEXPERIENCE platform by including the PCS and CNV trigrams. These are the minimum trigrams needed to connect to the platform (the price of admission).

The Dassault idea is still the same – help CATIA V5 users move away from file-based, directory-based storage (which has always presented its own challenges) and help companies regain control of their data via the new platform. The only caveat is that you would install ENOVIA to manage your data, which is not as simple as throwing in a set of discs like SmarTeam was. ENOVIA requires setting up a database on SQL Server or Oracle, then configuring the various pieces (web server, authentication, Java virtual machine, etc.). Once this has been configured, the base PCS, CNV combination gives you the ability to vault your data, set up workspaces defining where and how it will be stored, and perform some level of change management on it (setting up Change Requests and Routes that move your data through its life cycle to release).

Creation Menu

 

The ENOVIA applications that come with the PCS, CNV combination are Classify & Reuse, Collaboration & Approvals, Collaborative Lifecycle, Design IP Classification, Exchanges Management, My Collections, My Issues, My Route Tasks, and X-CAD Design. These are more than enough to help your team begin moving to a single source of truth – meaning never having to guess what state the latest data is in.

ENOVIA Apps

You also have access to applications for business intelligence. This includes dashboards – configurable views of your data, not unlike the old iGoogle portal that let you customize your view of news and other content. The dashboard technology comes from Netvibes, which Dassault acquired in 2012.


Information Intelligence

[…]

Autodesk Vault uses the concept of a “Local Workspace” whenever files are opened or checked out. Essentially, whenever a Vault file is accessed, a copy is cached in the workspace on the user’s local workstation. From a user’s perspective, the workspace can be ignored for most regular work. There are several benefits to a local workspace.

  1. Performance improvement over a network share – One of the problems without a PDM system is that files are opened directly across the network. Files being accessed and edited live on a network share and stay there while being worked on. In environments with multiple users working with large datasets, this can become a disaster. When files are checked out from Vault, they are cached locally, and the workstation’s drives can respond to changes much more quickly than a network server.
  2. Offline workflows – The local workspace also allows users to retrieve data to work on while disconnected from their corporate network.  The local workspace actually acts much like a briefcase:  The user simply checks out files, disconnects from the network and works on them, and checks them back in when they return to the network and are logged back into Vault.
  3. Better distributed workforce management – For companies with distributed workforces, the local workspace can also be a big benefit. Combining the performance gains with offline workflows makes distributed work genuinely practical. All that is required is a remote VPN connection so that files can be checked in and out of Vault; the VPN doesn’t have to stay connected, and while disconnected it works just like an offline workflow. Since checked-out files reside locally, distributed users still get good performance while editing and saving their work.

 

This is a second look at the hidden intelligence of CATIA V5. Our topic today is the creation and use of design tables. As I discussed in my last blog post, parameters and formulas can be used to drive your design from the specification tree based on your design intent. We will continue with the rectangular tubing part and build several variations of that tubing, driven from a spreadsheet.

Design Table Icon

Most of the work has already been done; while it is not necessary to have pre-defined parameters and formulas, the process is faster when they exist. We will begin by again looking at the Knowledge toolbar, this time focusing on the Design Table icon.

When the command is selected, a dialog appears asking for the name of the design table and giving you a choice of using a pre-existing file or creating one from the current parameter values. The difference is whether or not you already have a spreadsheet filled out with the tabulated values that change in each iteration of the design.

Design Table Dialog

 

To show the functionality, we will choose the option to create the table from current parameter values. Once that is decided, you choose which parameters you want the spreadsheet to drive. We already had some parameters created, so we changed the filter to User parameters, chose the values that were NOT driven by formulas (INSIDE and OUTSIDE RADII), and moved them to the inserted side by highlighting them and clicking the arrow.

Parameters to Insert

At this point, we have defined that we want a spreadsheet with columns for Height, Width, and Wall Thickness based on the current values in the model. When we click OK on the dialog, it will ask us where we want to save the spreadsheet. I suggest you put it in a place where anyone who uses the model has at least read access (e.g., a network drive). Note that I can also change the file type to .txt if I do not have access to Excel® or other software that can edit .xls files.

Read Access Directory

 

Once this has been defined, your design table is created, linked to your 3D model, and ready to be edited to include your alternate sizes. This is confirmed by the next dialog. To add the other sizes, simply click the Edit table… button; your editor (Excel or Notepad) should launch, and you can fill in rows with your values.

Linked and ready to edit
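As a rough illustration only – these column headers and sizes are made up, and in practice the header row must match your actual CATIA parameter names and units – a tab-delimited design table might contain something like:

    Height (mm)    Width (mm)    Wall_Thickness (mm)
    50             25            3
    75             50            4
    100            50            5

Each row becomes one selectable configuration of the part.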

Once you have edited and saved the values, you can close that software and CATIA will update based on your values.

Excel Modifications

 

CATIA Updated

Now you would just pick the value set you want and click OK for the change to appear on the screen.

File Updated

At any time, you can make changes by finding the Design Table under the Relations section of the specification tree and double-clicking on it.

Design Table under Relations

As you can see, it’s pretty easy to create a design table and drive your parametric file with multiple values. The world of CATIA V5 is all about re-use of data and capturing the business intelligence that already exists in every company. How can we help you? Tata Technologies has helped many companies time and again.

Stay tuned for Part 3!

This post was originally created on December 8, 2016.

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding common practices and techniques. This week’s blog post will address a common type of 3D printing known as Fused Deposition Modeling (FDM).

But first, What is Additive Manufacturing?

Additive manufacturing is the process of creating a part by laying down a series of successive cross-sections (2D “sliced” sections of a part). It came into the manufacturing world in the early 1980s – about 35 years ago – and was adopted more widely later in the decade. Another common term used to describe additive manufacturing is 3D Printing – a term which originally referred to a specific process, but is now used to describe all similar technologies.

Now that we’ve covered the basics of 3D Printing, What is Fused Deposition Modeling?

It is actually part of a broader category commonly referred to as Filament Extrusion Techniques. Filament extrusion techniques all utilize a thin filament or wire of material. The material, typically a thermoplastic polymer, is forced through a heating element and extruded in a 2D cross-section onto a platform. The platform is lowered and the process is repeated until a part is completed. In most commercial machines, and higher-end consumer-grade machines, the build area is typically kept at an elevated temperature to prevent part defects (more on this later). The most common form, and the first technology of this type to be developed, is FDM.
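As a purely conceptual sketch of that layer-by-layer idea – real slicing software computes toolpaths from a 3D model and is far more involved – the control loop boils down to something like this:

    # Conceptual layer-by-layer deposition loop (illustrative only)
    part_height_mm = 1.0
    layer_height_mm = 0.2

    num_layers = int(part_height_mm / layer_height_mm)
    for layer in range(num_layers):
        z = layer * layer_height_mm
        # 1. compute the 2D cross-section ("slice") of the model at height z
        # 2. extrude heated filament along that cross-section
        # 3. lower the platform (or raise the head) by one layer height
        print(f"layer {layer + 1}: deposit cross-section at z = {z:.1f} mm")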

The Fused Deposition Modeling Technique was developed by S. Scott Crump, co-founder of Stratasys, Ltd. in the late 1980s. The technology was then patented in 1989. The patent for FDM expired in the early 2000s. This helped to give rise to the Maker movement, by allowing other companies to commercialize the technology.

It should also be noted that Fused Deposition Modeling is also known as Fused Filament Fabrication (FFF), a term coined by the RepRap community because Stratasys holds a trademark on “Fused Deposition Modeling.”

What Are the Advantages of this Process? […]

I mentioned the process automation concept of ISight in a previous simulation automation blog post. ISight is a simulation automation and parametric optimization tool used to create workflows that automate the repetitive process of model update and job submission, with specific objectives attached. The objective could be reaching an optimal design through any of the techniques available in ISight: design of experiments, optimization, Monte Carlo simulation, or Six Sigma. In this blog post, I will discuss the various value-added algorithms in the DOE technique; the other techniques will follow in future posts.

Why design of experiments?

Real-life engineering models involve multiple design variables and multiple responses. There are two ways to evaluate the effect of a change in a design variable on a response: the vary-one-at-a-time (VOAT) approach or the design of experiments (DOE) approach. The VOAT approach is not viable because:

  • It ignores interactions among design variables, as well as averaged and non-linear effects.
  • In models with large FE entities, each iteration is very expensive. VOAT does not offer a way to create high-fidelity models with a manageable number of iterations.

With the DOE approach, the user can study the design space efficiently, manage a multi-dimensional design space, and select design points intelligently rather than guessing manually. The objective of any DOE technique is to generate an experimental matrix using formal, proven methods. The matrix explores the design space, and each technique creates its design matrix differently. There are multiple techniques, discussed shortly, and they fall into two broad configurations (a minimal full-factorial sketch follows the list below):

  • Configuration 1: The user defines the number of levels and their values for each design variable. The chosen technique and the number of variables determine the number of experiments.
  • Configuration 2: The user defines the number of experiments and the range of each design variable.
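As a minimal sketch of a Configuration 1 style matrix – here a simple full-factorial design, with variable names and levels invented purely for illustration – the user picks the levels and the number of experiments follows from them:

    # Full-factorial design matrix: every combination of every level
    from itertools import product

    levels = {
        "thickness_mm": [1.0, 2.0, 3.0],
        "rib_count":    [2, 4],
        "material":     ["steel", "aluminum"],
    }

    # 3 levels x 2 levels x 2 levels = 12 experiments
    design_matrix = list(product(*levels.values()))
    for run, point in enumerate(design_matrix, start=1):
        print(run, dict(zip(levels.keys(), point)))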

Box-Behnken Technique

This is a three-level factorial design consisting of orthogonal blocks that excludes extreme points. Box-Behnken designs are typically used to estimate the coefficients of a second-degree polynomial. The designs either meet, or approximately meet, the criterion of rotatability. Since Box-Behnken designs do not include any extreme (corner) points, they are particularly useful in cases where the corner points are either numerically unstable or infeasible. Box-Behnken designs are available only for three to twenty-one factors.
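To make that structure concrete, here is a small illustrative sketch, in coded -1/0/+1 units, of one common way a Box-Behnken matrix is constructed: for each pair of factors, run a two-level factorial on that pair while all other factors sit at their mid level, then add center points. (This is a conceptual example, not how ISight necessarily implements it.)

    from itertools import combinations, product

    def box_behnken(num_factors, center_points=3):
        runs = []
        for i, j in combinations(range(num_factors), 2):
            for a, b in product((-1, 1), repeat=2):
                point = [0] * num_factors
                point[i], point[j] = a, b   # pair at the extremes, rest at mid level
                runs.append(point)
        runs.extend([[0] * num_factors for _ in range(center_points)])
        return runs

    for run in box_behnken(3):
        print(run)   # 12 edge mid-points + 3 center points; no corner points appear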

Central Composite Design Technique […]

Watch and learn as i GET IT subject matter expert Eric Bansen walks through the Context Toolbar feature in CATIA V6 2016X 3D Experience. The Context Toolbar feature allows you to create and manipulate geometry within your part model. This session is great for those who are currently learning or want to know more about CATIA V6 2016X 3D Experience. For further interest in CATIA V6 2016X 3D Experience, there is the i GET IT course “3D Experience CATIA V6 2016X New Essentials.”

Click to Join

If you are in the business of designing and engineering product, then you have PLM. This is a statement of fact. The question then becomes: what is the technology underpinning the PLM process that is used to control your designs?

Because of the way that technology changes and matures, most organizations have a collection of software and processes that support their PLM processes. This can be called the Point Solution approach. Consider a hypothetical setup below:

The advantage of this approach is that point solutions can be individually optimized for a given process – so, in the example above, the change management system can be set up to exactly mirror the internal engineering change process.

However, this landscape also has numerous disadvantages:

  1. Data often has to be transferred between different solutions (e.g., what is the precise CAD model tied to a specific engineering change?). These integrations are difficult to set up and maintain – sometimes to the point of being manual tasks. (A small illustrative sketch of this mapping problem follows this list.)
  2. The organization has to deal with multiple vendors.
  3. Multiple PLM systems working together require significant internal support resource from an IT department.
  4. Training and onboarding of new staff are complicated.
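As a tiny, invented example of the first disadvantage – the glue needed just to answer “which exact CAD model does this engineering change affect?” – imagine two point solutions, each with its own record IDs, and the mapping someone has to maintain between them:

    # Invented records from two separate point solutions (not real products or APIs)
    change_system = {"ECO-1042": {"status": "Approved", "affected": ["DOC-778"]}}
    cad_vault     = {"DOC-778": {"file": "bracket.ipt", "revision": "C"}}

    def cad_models_for_change(eco_id):
        """Cross-system lookup; every such link is one more integration to keep alive."""
        eco = change_system[eco_id]
        return [cad_vault[doc_id] for doc_id in eco["affected"]]

    print(cad_models_for_change("ECO-1042"))  # [{'file': 'bracket.ipt', 'revision': 'C'}]

In a platform, both records live in one system and the link is native; in a point-solution landscape, this mapping has to be built, synchronized, and maintained.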

The alternative to this approach is a PLM Platform. Here, one technology solution includes all necessary PLM functionalities. The scenario is illustrated below:

It is clear that the PLM Platform does away with many of the disadvantages of the Point Solution; there is only one vendor to deal with, integrations are seamless, training is simplified, and support should be easier.

However, the PLM Platform may not provide the best solution for a given function when compared to the corresponding point solution. For example, a dedicated project management application may do a better job at program management than the equivalent functionality in the PLM Platform; this may require organizational compromise. You are also, to some extent, betting on a single technology vendor and hoping that they remain an industry leader.

Some of the major PLM solution vendors have placed such bets on the platform strategy. For example, Siemens PLM has positioned Teamcenter as a complete platform solution covering all aspects of the PLM process (refer to my earlier blog post, What is Teamcenter? or, Teamcenter Explained). All of the PLM processes that organizations need can be supported by Teamcenter.

Dassault Systèmes have pursued a similar approach with the launch of their 3DEXPERIENCE platform, which also contains all of the functions required for PLM. In addition, both are actively integrating additional functionality with every new release.

So what is your strategy – Point or Platform? This question deserves serious thought when evaluating the PLM processes in your organization.
