
Read Part 1 here.

So, what does a structured process to data migration and translation look like?

First a few definitions:

  • Source system – the origin of the data that needs to be translated or migrated. This could be a database or a directory structure.
  • Target system – the final destination for the data. On completion of the process, data in the target should be in the correct format.
  • Verify – ensure that data placed in the target system is complete, accurate, and meets defined standards.
  • Staging area – an interim location where data is transformed, cleaned, or converted before being sent to the target.

The process consists of five steps as shown below:

[Figure: the five-step migration process]

The process can be described as follows:

  • Data to be migrated is identified in the source system. This is an important step and ensures that only relevant data is moved. Junk data is left behind.
  • The identified data is extracted from the source system and placed in the staging area.
  • The data is then transformed into a format ready for the target system. Such a transformation could be a CAD-to-CAD translation, a metadata change, or a cleaning process. Transformation may also entail data enrichment – for example, appending additional properties to the objects so they can be found more easily in the target system.
  • Transformed data is then loaded into the target system. This can be done automatically via programs or manually, depending on the chosen method. Automatic routines can fail; failures are flagged for analysis and action.
  • Once data is loaded, validation is carried out to ensure that the migrated data is correct in the target system and not corrupted in some fashion.

The process described above is shown at a working level below:

[Figure: working-level view of the migration process, showing the extractor and loader tools]

Shown in this diagram are two software tools – extractors and loaders. These are usually custom utilities that use APIs (hooks into the source and target systems) to move the identified data. For example, an extractor tool may query a source PLM system for all released and frozen data that was released after a given date. Once the search is complete, the data it identifies is downloaded by the extractor from the PLM system into the staging area.

In a similar manner, a loader will execute against a prepared data set in the staging area and insert it into the target system, creating the required objects and adding the files.
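
To make the extractor and loader roles concrete, here is a minimal Python sketch. Everything in it is hypothetical: the client classes, method names, and query parameters simply stand in for whatever API or SDK the real source and target systems expose.

    from datetime import date
    from pathlib import Path

    class SourceSystemClient:
        """Stand-in for the source PLM/PDM API used by the extractor."""
        def query_released(self, released_after):
            # Would return identifiers of all released and frozen objects
            # that reached the released state after the given date.
            raise NotImplementedError
        def download(self, object_id, folder):
            # Would write the object's metadata and files into the staging folder.
            raise NotImplementedError

    class TargetSystemClient:
        """Stand-in for the target PLM API used by the loader."""
        def create_object(self, metadata_file, files):
            # Would create the object in the target system and attach its files.
            raise NotImplementedError

    def extract(source, staging: Path, cutoff=date(2016, 1, 1)):
        """Identify relevant data in the source and pull it into the staging area."""
        object_ids = source.query_released(released_after=cutoff)
        for object_id in object_ids:
            source.download(object_id, staging / object_id)
        return object_ids

    def load(target, staging: Path, object_ids):
        """Push transformed data into the target; failures are flagged for follow-up."""
        failures = []
        for object_id in object_ids:
            folder = staging / object_id
            try:
                target.create_object(metadata_file=folder / "metadata.json",
                                     files=sorted(folder.glob("*")))
            except Exception as exc:
                failures.append((object_id, str(exc)))   # analyse and re-run these
        return failures

The transformation step would sit between extract and load, rewriting the staged metadata and files into whatever format the target expects.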

It is highly recommended that pilot migrations be carried out on test data in a development environment to verify the process. This testing will identify potential bugs and allow them to be fixed before production data is touched.

Such a structured process goes a long way toward guaranteeing success!

PDF Publishing

‘Nuff said.

*and there was much rejoicing*

Well, maybe I could add a little more detail. It has long been known that the PDF is the currency of visual data exchange. All too often, I work with users and organizations that have to print PDFs outside of Vault, creating an uncontrolled document. If you were using the item master (discussed by my colleague here), you could attach it to the item; however, keeping it up to date is still going to be a manual process.

Now, thanks to the #1 most requested feature being implemented, that will no longer be an issue. Vault will now publish PDFs as part of your release process (as a transition action in a lifecycle change). This file will be categorized differently from the native CAD file, or even the DWF visualization file. The new category is called “Design Representation,” which can then be assigned its own set of rules, properties, and lifecycles.

As of this release, we have the ability to publish 2D file formats: DWG and IDW; that means either AutoCAD-based files or Inventor drawings can be published to PDF. At some point, Autodesk may need to add the 3D PDF generation that was added to Inventor recently – which, by the by, could be used to publish all of the new Model Based Definition (MBD) annotations Inventor 2018 has added. I suspect we could see 3D publishing in the next release, or even a mid-year “R2” release (if there is an “R2”; who knows at this point).

Questions, comments, and celebrations welcome.

My last post outlined the significance of Product Cost Management (PCM) for OEMs and Suppliers to drive profitability and continuous improvement throughout the entire supply chain.

Ideally, PCM needs to be done early in the product development cycle, as early as the conceptual phase – design and supplier selection are much more flexible early in the process – so it is important to enable cost engineering during the front end of product development and ensure profitability with control over costs for parts and tooling.

Not every organization can optimize cost early, though, and not in every situation; PCM processes and tools may also need to be applied in later stages of the product lifecycle. Even when cost models and fact-based consultation are applied early, they may need to be revisited several times over the lifecycle. PCM therefore needs to support the cost model across all corporate functions, from product development to sales, and establish a single consistent repository for estimating and communicating cost, with repeatable processes and historical information. Because PCM spans the product lifecycle, it is important to take an enterprise-wide approach to costing. An ideal PCM system needs to align with the product development process managed in a PLM system, so there is a lot of synergy between PLM and PCM.

The most commonly used tools for PCM – spreadsheets and custom programs that conduct simple rollups – are not suitable for enterprise-wide processes; these solutions do not provide the details required to develop credible cost models. They also make it very difficult for designers to compare products, concepts, and scenarios. Spreadsheets fail due to quality problems and the inability to implement them effectively on an enterprise scale, resulting in different product lines, geographies, or lines of business having different approaches. Non-enterprise approaches also make it difficult to reuse information or apply product changes, currency fluctuations, burden rate updates, or commodity cost changes.

Extending an enterprise-wide system like PLM with PCM functions allows cost management to be effectively communicated and captured, institutionalizing it for future product programs. This eliminates disconnected and inconsistent manual costing models and complex, difficult-to-maintain spreadsheets. It also supports easy, fast, and reliable impact analysis, so product changes can be incorporated accurately into costs with visibility into all cost factors, and it makes these processes repeatable. The PCM process can also leverage the existing 3D model parametric data managed in PLM systems to extract relevant parameters such as thickness, surface area, and volume for feature-based calculations. Other PLM data that can be reused for PCM includes labor rates from engineering project management, material costs from material management modules, and bills of materials/process and tooling information from engineering and manufacturing data management. An integrated PLM and PCM solution is also important for efficiency, allowing companies to reuse both product data and cost models to facilitate continuous improvement over time.
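
As a rough illustration of what a feature-based calculation with centrally managed rates can look like, here is a small Python sketch. The parameter names, rates, and formula are invented for illustration; they are not Teamcenter's (or any other tool's) actual cost model.

    def part_cost(volume_cm3, surface_cm2, material_rate_per_cm3,
                  cycle_time_s, labor_rate_per_h, burden_pct):
        """Roll up a unit cost from CAD-derived parameters and centrally managed rates."""
        material = volume_cm3 * material_rate_per_cm3        # driven by model volume
        finishing = surface_cm2 * 0.002                      # e.g. a coating cost per cm^2
        labor = (cycle_time_s / 3600.0) * labor_rate_per_h   # machine/operator time
        overhead = (material + labor) * burden_pct           # burden applied on top
        return material + finishing + labor + overhead

    # A burden-rate update or a currency change becomes a single input edit here,
    # rather than a hunt through dozens of disconnected spreadsheets.
    print(part_cost(volume_cm3=120, surface_cm2=310, material_rate_per_cm3=0.04,
                    cycle_time_s=95, labor_rate_per_h=42.0, burden_pct=0.18))

The point is not the arithmetic but where the inputs live: volume and surface area come from the 3D model, while rates come from a shared, versioned knowledge base instead of being retyped in every spreadsheet.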

In the next post of this series, I explain how the Siemens PLM Teamcenter suite supports PCM.

This post was originally created in January 2017.

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding the common practices and techniques. So, this week’s blog post will address a common type of 3D printing known as Powdered Bed & Inkjet 3D Printing (3DP).

What is Powdered Bed & Inkjet 3D Printing?

It is actually part of a broader category, commonly referred to as a Granular Based Technique. All granular based additive manufacturing techniques start with a bed of a powdered material. A laser beam or bonding agent joins the material in a cross section of the part. Then the platform beneath the bed of material is lowered, and a fresh layer of material is brushed over the top of the cross section. The process is then repeated until a complete part is produced. The first commercialized technique of this category is known as Selective Laser Sintering, though the main point of discussion here is Powdered Bed & Inkjet 3D Printing.

Invented in 1993 at the Massachusetts Institute of Technology, the technology was commercialized by Z Corporation in 1995. It uses a powdered material, traditionally a plaster or starch, that is held together with a binder. More materials are available now, such as calcium carbonate and powdered acrylic.

Though 3DP is a granular (or powder) based technique, it does not use a laser to create a part. Instead, a glue or binder joins the part together. It is also worth mentioning that this type of technique is where the term 3D Printing originated, as it uses an inkjet-style print head.

What Are the Advantages of this Process?

This process is one of the few Rapid Prototyping Techniques that can produce fully colored parts, through the integration of inks in the binders.

In addition, the material costs for this particular technique are relatively low, because the materials are widely available commercially.

Because parts are created in a bed of material, there is no need for the support structures required in other forms of rapid prototyping. This reduces the need for secondary operations and machining.

Another advantage of the material bed is the ability to stack multiple parts into the build envelope. This can greatly increase the throughput of a 3DP machine.

What Are the Disadvantages of this Process? […]

Standing on the beach, overlooking the bountiful, yet imperfect, harvest, he pondered the situation in front of him. “Why are all of my troop mates eating these sand-covered sweet potatoes? In the beginning, they were delicious…and without the sand. Now? These wonderful treats are all but inedible. What if I…”

This is the beginning of a tale based on a scientific research project, though it may have evolved into something of an urban legend. The idea is that scientists in Japan, circa 1952, were studying the behaviors of an island full of macaque monkeys. At first, the scientists gave the monkeys sweet potatoes. After a period of time, the scientists started covering the sweet potatoes in sand to observe how the monkeys would react. Not surprisingly, the monkeys still ate the treats, however begrudgingly. Then, the story goes, a young monkey took the vegetable to the water and washed it off. He discovered that it tasted as good as it had before the sand. Excitedly, the young monkey showed this discovery to his mother. Approvingly, his mother began washing hers in the water as well.

Still, the vast majority went on crunching away on their gritty meals. Over time, a few more monkeys caught on. It wasn’t until a magic number of monkeys were doing this – we’ll say the 100th – that seemingly the entire troop began rinsing their sweet potatoes off in the water.

Call it what you will – social validation, the tipping point, the 100th monkey effect, etc. It all comes down to the idea that we may not try something new, however potentially beneficial, until it’s “OK” to do so. Cloud solutions for PLM could be coming to that point. These products have been in the market for a few years now, and they mature with every update (and no upgrade headaches, either).

IDG Enterprise, in its “2016 IDG Enterprise Cloud Computing Survey,” forecasts that “Within the next three years, organizations have the largest plans to move data storage/data management (43%) and business/data analytics (43%) to the cloud.” Another survey, RightScale’s “2017 State of the Cloud Survey,” finds that overall challenges to adopting cloud services have declined. One of the most important concerns, security, has fallen from 29% of respondents reporting it as a challenge to 25%. Security is still a valid concern, though I think the market is starting to trust the cloud more and more.

With our experience and expertise with PLM solutions in the cloud, Tata Technologies can help you choose if, when, and how a cloud solution could be right for your company. Let us know how we can help.

My last post outlined the importance of having an integrated PLM and PCM solution. Siemens PLM implements this vision through its Product Cost Management application, bridging the gap between traditional PLM and ERP. With Teamcenter PCM, companies can migrate from disconnected tools to an integrated solution. The integrated platform helps them manage cost knowledge with consistent data, build standardized obligatory cost methods and models, and create fact-based, cost-driver-transparent calculations; at the same time, it enables cross-functional collaboration and communication.

Product Costing

Highlights of the product costing capabilities include:

  • Cross-functional calculation of pre-/quotation costing
  • Calculation of R&D costs
  • Purchase price analysis
  • Open book accounting
  • Profitability calculation/project ROI
  • Differentiated overhead calculation (freely selectable degree of detail)
  • Process-based bottom-up calculation and cost models (cost engineering approach)
  • Cost rate calculation with company-owned data records
  • Integrated cycle time calculators (die casting, injection molding, machining, MTM, client proprietary, etc.)
  • Versioning of calculations (documented change history)
  • Flexible simulations of what-if scenarios (e.g., production alternatives, volume adjustments)
  • Profitability calculations (return on investment over the product lifecycle)
  • Flexible reporting functions (e.g., multi-stage cost driver analysis)
  • Integration toolkits for data exchange with customer-specific systems (e.g., ERP)
  • Import and export of cost breakdown sheets (supplier and customer)
  • Multi-lingual, multi-currency, freely configurable costing methodologies
  • Cash flow calculation and data management for reuse

Tool Costing 

Teamcenter PCM’s parametric and 3D-based tool costing supports both quotation costing in tool-making and cost analysis in tool purchasing. Tool Costing delivers fast, reliable, and detailed information on manufacturing times and costs. It also enables both buyers and tool manufacturers to build up knowledge data precisely and repeatably, secure this information within the enterprise, and document it in an audit-compliant manner, with the option of using 3D data for the calculations. Teamcenter supports a variety of tool technologies, including injection molding, die casting, and composite tools; 3D data can be read automatically or manually to create the geometry parameters. Both the tool buyer and the tool maker can make decisions about tool costs – whether for injection molding, die casting, cutting, stamping, or other production tools – fully integrated within the Teamcenter product cost management solution.

Cost Knowledge Management 

Teamcenter PCM has a standard and extendable cost knowledge base for costing calculations, including worldwide factor costs (labor, production area, energy, interest rates, etc.), physical material data for all prevalent materials, reference machines with economic and technical data for all prevalent manufacturing technologies, and complete reference processes for many manufacturing methods, with the ability to integrate a customer-specific corporate costing library.

Profitability Calculation

The integrated profitability calculation in Teamcenter gives project and product controllers and managers a powerful business case analysis and decision-making tool while delivering the necessary instruments to ensure success, including:

  • Consolidation of multiple products in a single project (general project data, lifecycle, quantity progression, unit costs and prices, etc.)
  • Year-slice presentation of cash flows for project-specific investments (plants, tools, engineering, etc.)
  • Dynamization of unit costs and sales prices for the individual year slices in the product lifecycle
  • Calculation of common profitability ratios such as net present value (NPV), internal rate of return (IRR), return on capital employed (ROCE), return on sales (ROS), and amortization period (payback)
  • Project-based profit and loss accounts, as well as discounted cash flow accounts and a trend curve for cumulative (discounted) cash flow
  • Variant calculation and sensitivity analyses for comparing various what-if scenarios and premises
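
The profitability ratios named above follow their standard definitions. As a small sketch of how year-slice cash flows feed NPV and the payback period (the figures below are invented and have nothing to do with Teamcenter's implementation):

    # Hypothetical year-slice cash flows for one product program; year 0 is the investment.
    cash_flows = [-500_000, 150_000, 220_000, 260_000, 240_000]
    discount_rate = 0.10

    # Net present value: each year slice discounted back to today.
    npv = sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))

    # Amortization period (payback): first year in which cumulative cash flow turns positive.
    cumulative, payback_year = 0.0, None
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if payback_year is None and cumulative >= 0:
            payback_year = year

    print(f"NPV at 10%: {npv:,.0f}   Payback: year {payback_year}")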

This is Part 3 in my series on the hidden intelligence of CATIA V5. To quickly recap what we have already talked about: in my first post I discussed the importance of setting up and using parameters and formulas to capture your design intent and quickly modify things that you know are likely to change. We took those principles a bit further in my second post and discussed the value of building a design table for those situations when you have a design with parameters that will vary and that you want to use many times. In that case you could see that we had our rectangular tubing part and could modify its wall thickness, height, and width to make several iterations of basically any size of tubing one would ever need! You would simply keep doing a Save As… and placing those parts in your working directory to be added into an assembly at some point (I assume).

This methodology would work fine, but today I want to focus on a very cool spin on this theory: building a catalog of your most commonly used parts that are similar enough to be captured in a single model. Using our tubing model, and picking up where we left off, we have a spreadsheet that defines the parameters that change. All we need to do to build a catalog of each iteration of the design table is add a column to the spreadsheet named PartNumber – just as I have it, with no spaces in the name – and then associate that column with the ‘Part Number’ intrinsic parameter that is created automatically when you begin a model.
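
For illustration, the edited design table might end up looking something like this. The part numbers and dimensions are invented, and the header names must match the parameter names defined in your model (plus the new PartNumber column):

    PartNumber     Wall_Thickness (mm)   Height (mm)   Width (mm)
    TUBE-050-3     3                     50            50
    TUBE-075-3     3                     75            50
    TUBE-100-5     5                     100           100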

Let’s get started.  I will open both the model and the spreadsheet, edit the spreadsheet with the column, and then add in some part numbers.

Part numbers added

When you save the file, the field should appear in CATIA when you click on the Associations tab. […]

What is data migration and translation? Here is a definition that will help:

  • Information exists in different formats and representations. For example, Egyptian hieroglyphics are a pictorial language (representation) inscribed on stone (format)
  • However, information is only useful to a consumer in a specific format and representation. So, Roman letters printed on paper may mean the same as an equivalent hieroglyphic text, but the latter could not be understood by an English reader.
  • Migration moves data between formats – such as stone to paper
  • Translation moves data between representations – hieroglyphics to Roman letters

What must a migration and translation achieve?

  • The process preserves the accuracy of the information
  • The process is consistent

In the PLM world, the requirement for data translation and migration arises as the result of multiple conditions. Examples include changes in technology (one CAD system to another), upgrades to software (from one level of a PLM system to a later version), combination of data from two different sources (CAD files in a directory structure with files in a PDM), acquisitions and mergers between companies (combining product data), and integration between systems (connecting PLM to ERP).

However, migrations and translations can be fraught with problems and require considerable effort. Here are some reasons: […]

This post was originally created in January 2017.

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding the common practices and techniques. So, this week’s blog post will address a common type of 3D printing known as Direct Metal Laser Sintering (DMLS).

What is Direct Metal Laser Sintering?

DMLS is actually part of a broader category, commonly referred to as a Granular Based Technique. All granular-based additive manufacturing techniques start with a bed of a powdered material. A laser beam or bonding agent joins the material in a cross-section of the part. Then the platform beneath the bed of material is lowered, and a fresh layer of material is brushed over the top of the cross section. The process is then repeated until a complete part is produced. The first commercialized technique of this category is known as Selective Laser Sintering.

The Selective Laser Sintering technique was developed in the mid-1980s by Dr. Carl Deckard and Dr. Joseph Beaman at the University of Texas at Austin, under DARPA sponsorship. As a result of this, Deckard and Beaman established the DTM Corporation with the explicit purpose of manufacturing SLS machines. In 2001, DTM was purchased by its largest competitor, 3D Systems.

DMLS is the same process as SLS, though there is an industry distinction between the two, so it is important to make note of this. DMLS is performed using a single metal, whereas SLS can be performed with a wide variety of materials, including metal mixtures (where metal is mixed with substances like polymers and ceramics).

What Are the Advantages of this Process?

[…]

When working with our customers, from time to time, we’ll get questions on why they see unexpected results in some of their searches. This typically happens when they search without wildcards (I’ll explain later). In this blog post, I hope to shed some light on what can be a confusing experience for some Vault users.

The search engine in Vault operates on a general computer science principle called tokenization. This process essentially chops up the indexed properties into chunks called tokens. When a user searches in Vault (either quick search or advanced find), the search engine will attempt to match the tokens in the search string to the tokens in the appropriate properties. Before going further, I’ll explain how Vault does the slicing and dicing.

First, there are three categories of characters (for our purposes, at least): alpha [a-z, A-Z], numeric [0-9], and special [#, ^, $, blank space, etc.]. Vault will parse the string and sniff out groups of characters belonging to the same category. For instance, ABC123$@# would be tokenized into three individual tokens:

  • ABC
  • 123
  • $@#

Again, what happened is that Vault saw the first character, A, and understood it to be an alpha character. Vault then asked, “Is the next character an alpha, too?” The answer was yes, so the token became AB. C was then added to the token, as it too is an alpha character. However, the answer was “no” when it came to the character 1, so Vault closed out its first token and began the next one, having sensed a different category of character. Vault continued this line of questioning with the subsequent characters.

Another example might be a file name like SS Bearing Plate-6x6.ipt. Here, we have 8 tokens:

  • SS
  • Bearing
  • Plate
  • - (dash)
  • 6
  • x
  • 6
  • ipt

Now, you may have caught the missing period. Vault will only tokenize six special characters – all others are ignored. These special special characters (sorry, had to do it) are:

  • $ (dollar sign)
  • - (dash)
  • _ (underscore)
  • @ (at symbol)
  • + (plus)
  • # (octothorpe, aka number sign)
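
To tie the rules together, here is a rough Python approximation of the tokenization behavior described above. It is a sketch for illustration only (the real Vault indexer is certainly more involved), but it reproduces the alpha/numeric/special grouping and the six kept special characters:

    KEPT_SPECIALS = set("$-_@+#")   # the six special characters Vault keeps as tokens

    def category(ch):
        """Classify a character; None means it is ignored (period, space, etc.)."""
        if ch.isalpha():
            return "alpha"
        if ch.isdigit():
            return "numeric"
        if ch in KEPT_SPECIALS:
            return "special"
        return None

    def tokenize(text):
        tokens, current, current_cat = [], "", None
        for ch in text:
            cat = category(ch)
            if cat is None:                  # ignored character: close any open token
                if current:
                    tokens.append(current)
                current, current_cat = "", None
            elif cat == current_cat:         # same category: keep growing the token
                current += ch
            else:                            # category change: close the token, start a new one
                if current:
                    tokens.append(current)
                current, current_cat = ch, cat
        if current:
            tokens.append(current)
        return tokens

    print(tokenize("ABC123$@#"))             # ['ABC', '123', '$@#']
    print(tokenize("SS Bearing Plate-6x6.ipt"))
    # ['SS', 'Bearing', 'Plate', '-', '6', 'x', '6', 'ipt']

Running it against the file name from earlier gives the eight tokens listed above, with the period and spaces discarded.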

So where do the unexpected results come in? This usually happens when an incomplete token is used without wildcards. For example, a user wants to find a specific mounting bracket and types in “mount,” expecting that to be enough. In our hypothetical Vault environment, the results would return “Fan mount.ipt” but not “Mounting bracket.ipt” as they intended. Why? Remember that Vault is trying to match exact tokens (again, without wildcards).

If the user had entered mount*, the results would return the expected “Mounting bracket.ipt” as the user intended.

Moral of the story?  Always use wildcards…always.  No, really, all the time.  For everything.
