Industry Insights

Read articles by industry-leading experts at Tata Technologies as they present information about PLM products, training, knowledge expertise, and more. Sign up below to receive updates on posts by email.

“To specialize or not to specialize, that is the question.”

The question of specializing vs. generalizing arises in many domains: biology, health, higher education, and, of course, software.  When one has to decide between the two ends of the spectrum, the benefits and risks must be weighed.

As environments have changed over time, animals have had to change or perish. Certain species adapted to survive on plants – herbivores – while others adapted to eat meat – carnivores.  When in their preferred environments with ample resources, each can thrive.  However, if conditions in those environments change so that those resources are no longer as bountiful, they may die out. Then comes the omnivore, whose adaptations enable it to survive on either type of resource. With this wider capability for survival comes a cost in efficiency: the further you move up the food chain, the less efficient the transfer of energy becomes.  Plants produce energy, of which an herbivore derives only about 10%, and the carnivore that feeds on the herbivore gets only 10% of that 10% – i.e., 1% of the original energy.

Three hundred trout are needed to support one man for a year.
The trout, in turn, must consume 90,000 frogs, that must consume 27 million grasshoppers that live off of 1,000 tons of grass.
— G. Tyler Miller, Jr., American Chemist (1971)
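
To put rough numbers on that 10% rule, here is a back-of-the-envelope illustration (the exact transfer efficiency varies by ecosystem, so treat every figure as indicative only):

```python
# Rough illustration of the ~10% trophic transfer rule; all figures are indicative only.
TRANSFER_EFFICIENCY = 0.10  # roughly 10% of energy passes up each level of the food chain

plant_energy = 100.0                                       # arbitrary units produced by plants
herbivore_energy = plant_energy * TRANSFER_EFFICIENCY      # ~10 units reach the herbivore
carnivore_energy = herbivore_energy * TRANSFER_EFFICIENCY  # ~1 unit reaches the carnivore

print(f"Herbivore captures ~{herbivore_energy:.0f}% of the plants' energy")
print(f"Carnivore captures ~{carnivore_energy:.0f}% of the plants' energy")
```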

When it comes to deciding on a course of action for a given health problem, people have the option of going to their family doctor, a.k.a. a general practitioner, or a specialist. There are “…reams of papers reporting that specialists have the edge when it comes to current knowledge in their area of expertise” (Turner and Laine, “Differences Between Generalists and Specialists”), whereas the generalist, even if knowledgeable in the field, may lag behind the specialist and prescribe out-of-date – but still generally beneficial – treatments.  This begs the question: what value do we place on the level of expertise?  If you have a life-threatening condition, then a specialist makes sense; however, you wouldn’t see a cardiologist just because your heart races after a walk up a flight of stairs – your family doctor can diagnose that you need more exercise.

When it comes to higher education, the same choice exists: deep knowledge and experience in a few areas, or a shallower understanding across a broad range of applications. Does the computer science major specialize in artificial intelligence or networking? Or neither? How about the music major?  Specialize in classical or German polka? When making these decisions, goals should be decided upon first. What is it that drives the person? A high salary in a booming market (hint: chances are that’s not German polka)? Pursuing a passion, perhaps at the cost of potential income? Or the ability to be valuable to many different types of employers so you can change as the markets do? It’s been shown that specialists may not always command a higher price tag; some employers value candidates who demonstrate they can thrive in a variety of pursuits.

Whether you’re looking to take advantage of specialized design products (for instance, sheet metal or wire harnesses) or to gain the value inherent in a general suite of tools within a connected PLM platform that can handle project management, CAPA, and Bill of Materials management, we have the means. A “Digital Engineering” benchmark can help you decide whether specialized tools are right for your company. Likewise, our PLM Analytics benchmark can help you choose the right PLM system or sub-system to implement.

Specialize, or generalize? Which way are you headed and why?

This post was originally created on December 8, 2016.

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding common practices and techniques. This week’s blog post will address a common type of 3D printing known as Fused Deposition Modeling (FDM).

But first, What is Additive Manufacturing?

Additive manufacturing is the process of creating a part by laying down a series of successive cross-sections (a 2D “sliced” section of a part). It came into the manufacturing world about 35 years ago in the early 1980s, and was adopted more widely later in the decade. Another common term used to describe additive manufacturing is 3D Printing – a term which originally referred to a specific process, but is now used to describe all similar technologies.

Now that we’ve covered the basics of 3D Printing, What is Fused Deposition Modeling?

It is actually part of a broader category commonly referred to as Filament Extrusion Techniques.  Filament extrusion techniques all utilize a thin filament or wire of material. The material, typically a thermoplastic polymer, is forced through a heating element and extruded as a 2D cross-section onto a platform. The platform is lowered and the process is repeated until the part is complete. In most commercial machines, and higher-end consumer-grade machines, the build area is typically kept at an elevated temperature to prevent part defects (more on this later). The most common form, and the first technology of this type to be developed, is FDM.
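
As a purely conceptual sketch of that loop (this is not the control code of any real machine; the function names and numbers are made up for illustration):

```python
# A minimal, illustrative sketch of the layer-by-layer idea behind filament extrusion / FDM.
def slice_heights(part_height_mm: float, layer_height_mm: float):
    """Return the Z height of each successive 2D cross-section ("slice")."""
    z, heights = 0.0, []
    while z < part_height_mm:
        heights.append(round(z, 3))
        z += layer_height_mm
    return heights

def fdm_build(part_height_mm=2.0, layer_height_mm=0.2):
    """Extrude one cross-section, step to the next layer, repeat until the part is complete."""
    for z in slice_heights(part_height_mm, layer_height_mm):
        # In a real machine, a heated nozzle traces the 2D cross-section at height z,
        # inside a heated build area that helps prevent warping and other defects.
        print(f"extruding cross-section at z = {z} mm")
    print("part complete")

fdm_build()
```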

The Fused Deposition Modeling Technique was developed by S. Scott Crump, co-founder of Stratasys, Ltd. in the late 1980s. The technology was then patented in 1989. The patent for FDM expired in the early 2000s. This helped to give rise to the Maker movement, by allowing other companies to commercialize the technology.

It should also be noted that Fused Deposition Modeling is also known as Fused Filament Fabrication, or FFF. This term was coined by the RepRap community because Stratasys holds a trademark on “Fused Deposition Modeling.”

What Are the Advantages of this Process? […]

If you are in the business of designing and engineering products, then you have PLM. This is a statement of fact. The question then becomes: what is the technology underpinning the PLM process used to control your designs?

Because of the way that technology changes and matures, most organizations have a collection of software and processes that support their PLM processes. This can be called the Point Solution approach. Consider a hypothetical setup below:

The advantage of this approach is that point solutions can be individually optimized for a given process – so, in the example above, the change management system can be set up to exactly mirror the internal engineering change process.

However, this landscape also has numerous disadvantages:

  1. Data often has to be transferred between different solutions (e.g. what is the precise CAD model tied to a specific engineering change?). These integrations are difficult to set up and maintain – sometimes to the point of being manual tasks (see the sketch after this list).
  2. The organization has to deal with multiple vendors.
  3. Multiple PLM systems working together require significant internal support resources from an IT department.
  4. Training and onboarding of new staff are complicated.
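
To make the first disadvantage concrete, here is a hedged sketch of the kind of glue code a point-solution landscape tends to accumulate. Every endpoint, field name, and system below is hypothetical; the point is that each pair of tools needs its own mapping, which someone has to build and maintain:

```python
# Hypothetical integration between a change-management tool and a CAD vault.
# All URLs and field names are invented for illustration.
import requests

CHANGE_SYSTEM = "https://change-tool.example.com/api"  # hypothetical change-management API
CAD_VAULT = "https://cad-vault.example.com/api"        # hypothetical CAD data-management API

def cad_model_for_change(change_id: str) -> dict:
    """Answer: what is the precise CAD model tied to a specific engineering change?"""
    change = requests.get(f"{CHANGE_SYSTEM}/changes/{change_id}").json()
    part_number = change["affected_part"]    # field names differ between tools...
    revision = change["released_revision"]   # ...and every mapping is maintained by hand
    models = requests.get(
        f"{CAD_VAULT}/models", params={"part": part_number, "rev": revision}
    ).json()
    if not models:
        # When the link breaks, the answer usually comes from a manual lookup.
        raise LookupError(f"No CAD model found for {part_number} rev {revision}")
    return models[0]
```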

The alternative to this approach is a PLM Platform. Here, one technology solution includes all necessary PLM functionalities. The scenario is illustrated below:

It is clear that the PLM Platform does away with many of the disadvantages of the Point Solution; there is only one vendor to deal with, integrations are seamless, training is simplified, and support should be easier.

However, the PLM Platform may not provide the best solution for a given function when compared to the corresponding point solution. For example, dedicated project management software may do a better job at program management than the functionality in the PLM Platform; this may require organizational compromise. You are also, to some extent, betting on a single technology vendor and hoping that they remain an industry leader.

Some of the major PLM solution vendors have placed such bets on the platform strategy. For example, Siemens PLM have positioned Teamcenter as a complete platform solution covering all aspects of the PLM process (refer to my earlier blog post What is Teamcenter? or, Teamcenter Explained). All of the PLM processes that organizations need can be supported by Teamcenter.

Dassault Systèmes have pursued a similar approach with the launch of their 3DEXPERIENCE platform, which also contains all of the functions required for PLM. In addition, both are actively integrating additional functionality with every new release.

So what is your strategy – Point or Platform? This question deserves serious thought when evaluating the PLM processes in your organization.

So you’re a manager at a manufacturing company. You make things that are useful to your customers and you answer to the executives regarding matters such as budgets, efficiencies, timelines and deliverables. You will have at least heard of PLM; perhaps you have attended a conference or two. But how badly do you need to implement it or retool an existing setup?

Here are 10 indicators:

  1. Your staff is always late meeting deadlines. This results from poorly executed projects, inefficient processes, and lack of clear deliverables. All of these problems can be addressed by a PLM system, starting with the enforcement of common processes and followed up by proper project planning.
  2. Department costs are creeping up. You are held to a tight budget by the organization. You are always close to or exceeding your budget, and it is difficult to get a handle on why. A PLM system can help in two ways: by making processes more efficient, leading to greater productivity, and by providing better information to managers.
  3. Rework is rampant. A lot of work needs to be repeated or reworked because it was not correct the first time. A PLM system can certainly help with this problem by supporting common working practices and introducing checks at crucial points.
  4. Your department is constantly battling other departments. There is a lot of finger pointing and blame that goes around the organization. No one can pin down who is responsible or when information was provided. PLM can provide automatic notifications, timestamped deliverables, and clear and unequivocal instructions.
  5. There is no accountability in your department. It is difficult to diagnose where mistakes were made and who is responsible. People are always blaming other people. A PLM system can provide objective data that allows root causes to be identified and accountability to be established.
  6. Overtime is out of control. Excessive overtime worked in your department is always a concern. A PLM system can help improve productivity and give managers better information regarding where inefficiencies exist.
  7. Your competitors always seem better. Your bosses are always holding you up against your competition and showing how they are better. A PLM system can put you ahead, because there’s a good chance your competitors do not have a PLM system, or have not made good use of it if they do.
  8. External customers complain that they do not get the information they need. You owe your customers information at various stages during the design cycle and they often don’t receive it in a timely manner. A suitably configured PLM system can improve this dramatically.
  9. Your suppliers provide the wrong information. You are constantly going around in circles with your suppliers regarding information. But do your suppliers have the right capabilities to begin with, and do you have the capability to meet them on the same terms? PLM technology can bridge this gap.
  10. Process adherence is poor. Although you have some level of documented processes, adherence is poor. A correctly configured PLM system can fix this quickly.

Do you have three or more of these issues keeping you up at night? Time to take a serious look at a PLM system.

There they were, sailing along their merry way. Toward the horizon, a narrow strait approached. As the boat drew closer, they noticed a couple of strange features: on one side a cliff, on the other a whirlpool. Upon arrival, it became apparent that this was the cliff where the monster Scylla dwelt. On the other side lurked the monster Charybdis, spewing out huge amounts of water and causing deadly whirlpools. Each monster was close enough that avoiding one meant meeting the other. Determined to get through, our intrepid hero Ulysses had to make a decision.  The idiom “between Scylla and Charybdis” comes from this story.  In more modern terms, we would translate this as “the lesser of two evils.”

PLM administrators, engineering managers, and IT teams are often given this same choice, with equally deadly – well, unfortunate – outcomes. What is this dilemma? Customize the PLM system (beyond mere configuration) to match company policies and processes, or change the culture to bend to the limitations posed by “out of the box” configurations.

Companies will often say something to the effect of “We need the system to do X,” to which many vendors meekly reply, “Well, it can’t exactly do X, but it’s close.” So what is a decision-maker to do? Trust that the organization can adapt, risking lost productivity and possibly mutiny? Or ask, “What will it take to get it to do X?” and incur the risk of additional cost and implementation time?

We can further elaborate on the risks of each.  When initially developing customizations, there is the risk of what I call “vision mismatch”: the requirement X is described as well as possible, but the fuller picture behind it is lost when the developer writes up the specification.  This leads to multiple revisions of the code and frustration on both sides of the table.  Customizations also carry the longer-term risk of “locking” you into a specific version.  While you gain the benefit of keeping your processes perfectly intact, the system is stuck in time unless the customizations are upgraded in parallel.  Some companies avoid that by never upgrading…until their hardware, operating systems, or underlying software platforms become unsupported and obsolete. Then the whole thing can come to a crashing halt.  Hope the backups work!

However, not customizing has its own risks. What if the new PLM system is replacing an older “homegrown” system that automated processes the new system cannot? (A “homegrown” system comes with its own set of risks: the original coder leaves the company, the code was never commented, there are no specifications, etc.)  For example, raising an issue used to automatically create an engineering change request while also starting a CAPA process; now the company has gained a manual process, exposing it to human error. Or perhaps the company has a policy requiring change orders to go through a “four-eyes” approval process, and the new system has no mechanism to support that use case.

Customizing is akin to Charybdis, whom Ulysses avoided, deciding it was better to knowingly lose a few crew members than to risk losing the entire ship to the whirlpool. Not customizing is more like Scylla: the losses are smaller, but they are almost certain to occur.

We’ve been through these straits and lived.  We’ve navigated them with many companies, from large multi-nationals to the proverbial “ma and pa” shops.  Let us help you navigate the dangers with our PLM Analytics benchmark.

Enterprise-wide PLM systems hold huge amounts of business data that can potentially be used to drive business decisions and effect process changes that generate added value for the organization.  Many PLM users are unaware that such valuable data even exists, while for others, advanced data search and retrieval can feel like looking for a needle in a haystack due to their unfamiliarity with the PLM data model. Hence it is important to process the data into meaningful information and model it into actionable engineering knowledge that can drive business decisions for everyday users. Reporting plays a key role in summarizing that large amount of data into a simple, usable format for easy understanding.

Reporting starts with capturing the right data – the most important step and, often, the least emphasized one. When data is not captured in the right format, the result is inconsistent or non-standard data.

Let’s take a simple workflow example: workflow rejection comments are valuable information, helping companies understand recurring reasons for rejection and improve first-time yield (FTY) by developing training plans to address them.  Users might not enter rejection comments unless they are made mandatory, so it’s important to have data-model checks and balances to capture the right data and standardize it through categorization and lists of values (LOVs).
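
As a sketch of what such a check might look like (the categories and field names here are hypothetical, and a real PLM platform would enforce this through its own data-model configuration rather than standalone code):

```python
# Illustrative check: require a rejection comment and force it into a standard category (LOV).
REJECTION_CATEGORIES = {  # hypothetical list of values
    "missing information", "incorrect geometry", "wrong revision", "process not followed",
}

def validate_rejection(comment: str, category: str) -> None:
    """Block the workflow rejection unless a comment and a standard category are supplied."""
    if not comment or not comment.strip():
        raise ValueError("A rejection comment is mandatory.")
    if category.lower() not in REJECTION_CATEGORIES:
        raise ValueError(f"Category must be one of: {sorted(REJECTION_CATEGORIES)}")

# validate_rejection("Dimensions missing on sheet 2", "other")  # would raise: non-standard category
```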

The next step is to filter and present the right information to the right people. End users typically want to run pre-designed reports and perhaps slice and dice the data to understand it better. Business intelligence designers and business analysts who understand the PLM schema and its business relationships are the ones who design the report templates. Report design is sometimes perceived as an IT or software function, and as a result sufficient business participation is not ensured, which can affect the usefulness of the reports for end users. It is important to have business participation from report identification through report design to report usage. Business process knowledge is the key in this area, not PLM tool expertise alone.
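
For instance, once rejection comments are captured against a standard list of values, a report designer might roll them up along these lines. This is a hedged sketch using pandas; the column names and data are assumptions, not a real PLM schema:

```python
# Illustrative roll-up of workflow rejections by category; column names and data are made up.
import pandas as pd

rejections = pd.DataFrame({
    "workflow": ["ECR-101", "ECR-102", "ECR-103", "ECR-104", "ECR-105"],
    "category": ["missing information", "wrong revision", "missing information",
                 "incorrect geometry", "missing information"],
})

summary = (
    rejections.groupby("category").size()
    .sort_values(ascending=False)
    .rename("rejection_count")
)
print(summary)  # the top categories point to where training plans would lift first-time yield
```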

Since business processes get improved/modified based on different market and performance trends derived from PLM reports, it’s important to have continuous improvement initiatives to fine-tune reporting based on these improved processes and new baselines, from data capture to presentation. That makes it a continuous cycle – business processes need to be designed to support reporting and reports need to help improve the process.

Properly designed reports provide increased visibility into shifting enterprise-wide status, reduce the time and cost of data analysis, ensure quicker response times and faster product launch cycles, and improve product quality and integrity.

How do your reports measure up? Do you have any questions or thoughts? Leave a comment here or contact us if you’re feeling shy.

In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead.

— Aristotle, Physics VI:9, 239b15

This paradox, as first developed by Zeno, and later retold by Aristotle, shows us that mathematical theory can be disproved by taking the hypothesis to an absurd conclusion.  To look at it another way, consider this joke:

A mathematician and an engineer are trapped in a burning room.

The mathematician says “We’re doomed! First we have to cover half the distance between where we are and the door, then half the distance that remains, then half of that distance, and so on. The series is infinite.  There’ll always be some finite distance between us and the door.”

The engineer starts to run and says “Well, I figure I can get close enough for all practical purposes.”
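
In case the engineer's logic needs numbers: if each step covers half of what remains, you are within any practical tolerance after a handful of steps, even though the mathematician's series never technically reaches zero. A toy sketch with made-up figures, which also previews the diminishing returns discussed below:

```python
# Toy model of "close enough for all practical purposes" and of diminishing returns:
# assume each refinement step halves whatever distance (or error) remains.
remaining = 100.0   # starting distance/error, in arbitrary units (made-up figure)
tolerance = 1.0     # what we consider "close enough"
step = 0

while remaining > tolerance:
    step += 1
    gain = remaining / 2    # this step's improvement...
    remaining -= gain       # ...which shrinks with every iteration
    print(f"step {step}: remaining ~{remaining:.2f} (gained {gain:.2f})")

print(f"Close enough after {step} steps; each further step buys less and less.")
```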

The principle here, as it relates to simulation tools like FEA, is that every incremental step taken in the simulation process gets us closer to our ultimate goal of understanding the exact behavior of the model under a given set of circumstances. However, there is a point of diminishing returns at which a physical prototype must be built. This evolution of simulating our designs has saved a lot of money for manufacturers who, in the past, would have had to build numerous iterative physical prototypes. This evolution of FEA reminds me of…

The uncanny valley is the idea that as a human representation (robot, wax figure, animation, 3D model, etc.) increases in human likeness, people's affinity toward it also increases. That is, however, only until a certain point.  Once this threshold is crossed, our affinity drops off to the point of revulsion, as with zombies or “intermediate human-likeness” prosthetic hands.  However, as the realism continues to increase, the affinity, in turn, starts to rise again.

Personally, I find this fascinating – that a trend moving through time can abruptly change direction, and then, for some strange reason, revert to its original direction. Why does this happen? There are myriad speculations as to why on the Wikipedia page, which I encourage the reader to peruse at leisure.

But to tie this back to FEA, think of the beginning of the uncanny valley curve as the start of computer-aided design simulation. The horizontal axis is time, the vertical axis is accuracy.  I posit that over time, as simulation software has improved, the accuracy of our simulations has also increased. As time has gone on, the ease of use has also improved, allowing non-doctorate holders to utilize simulation as part of their design process.

And this is where we see the uncanny valley: as good as the software is, there comes a point – if you need specialized, intricate, or non-standard analysis – where the accuracy of the software falters. This tells us that there will still be a need for those PhDs, and once they join the design effort and start using the software, we see the accuracy go up exponentially.

If you need help getting to the door, or navigating the valley, talk to us about our Simulation benchmark process. Leave a comment or click to contact us.

 

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding the common practices and techniques. So, this week’s blog post will address a common type of 3D printing known as Selective Laser Sintering (SLS).

But first, What is Additive Manufacturing?

Additive manufacturing is the process of creating a part by laying down a series of successive cross-sections (a 2D “sliced” section of a part). It came into the manufacturing world about 35 years ago in the early 1980s, and was adopted more widely later in the decade. Another common term used to describe additive manufacturing is 3D Printing – a term which originally referred to a specific process, but is now used to describe all similar technologies.

Now that we’ve covered the basics of 3D Printing, What is Selective Laser Sintering?

It is actually part of a broader category commonly referred to as Granular-Based Techniques. All granular-based additive manufacturing techniques start with a bed of powdered material. A laser beam or bonding agent joins the material in a cross-section of the part. Then the platform beneath the bed of material is lowered, and a fresh layer of material is brushed over the top of the cross-section. The process is then repeated until a complete part is produced. The first commercialized technique of this category is known as Selective Laser Sintering.
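
Sketched conceptually (this is not machine control code; the names and numbers are illustrative only), the powder-bed loop looks like this:

```python
# Purely illustrative outline of a powder-bed (SLS-style) build loop.
def sls_build(part_height_mm=2.0, layer_thickness_mm=0.1):
    layers = round(part_height_mm / layer_thickness_mm)
    for layer in range(1, layers + 1):
        # In a real machine: the laser sinters the 2D cross-section into the powder bed...
        print(f"sintering cross-section {layer} of {layers}")
        # ...the platform then drops by one layer thickness...
        # ...and a recoater brushes a fresh layer of powder over the top.
    print("part complete; remove it from the surrounding unsintered powder")

sls_build()
```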

The Selective Laser Sintering technique was developed in the mid-1980s by Dr. Carl Deckard and Dr. Joseph Beaman at the University of Texas at Austin, under DARPA sponsorship. Deckard and Beaman went on to establish the DTM Corporation with the explicit purpose of manufacturing SLS machines.  In 2001, DTM was purchased by its largest competitor, 3D Systems.

What Are the Advantages of this Process?

SLS is quick. It’s one of the fastest rapid prototyping techniques. Though, relatively speaking, most techniques are fast. SLS also has the widest array of usable materials. Theoretically, just about any powdered material can be used to produce parts. In addition, it can potentially be one of the most accurate rapid prototyping processes – the major limiting factor being the particle size of the powdered material.

Because parts are created in a bed of material, there is no need for support structures as in other forms of rapid prototyping. This helps to avoid secondary operations and machining. Another advantage of the material bed is the ability to stack multiple parts into the build envelope, which can greatly increase the throughput of an SLS machine.

What Are the Disadvantages of this Process?

Of the commercially available rapid prototyping machines, those that use the Selective Laser Sintering technique tend to have the largest price tag. This is usually due to the scale of production these machines are designed for, which makes them much larger than others.

SLS can be very messy. The material used is a bed of powdered material and, if not properly contained, will get EVERYWHERE. In addition, breathing in powdered metals and polymers can potentially be very hazardous to one’s health; though most machines account for this, it is certainly something to be cognizant of when manufacturing.

Unlike other manufacturing processes, SLS limits each part to a single material. This means parts printed on SLS machines will be limited to those with uniform material properties throughout.

As the materials aren’t fully melted, full-density parts are not created through this process. Thus, parts will be weaker than those created with traditional manufacturing processes, although full-density parts can be created through similar processes, such as SLM (Selective Laser Melting).

In Conclusion

There are quite a few different ways to 3D print a part, with unique advantages and disadvantages of each process. This post is part of a series, discussing the different techniques. Thanks for reading!

When we talk with customers that may have a need to enhance their PLM technology or methods, there are commonly two different schools of thought regarding the subject.  Generally companies start the conversation with one of two different focuses: either CAD-focused or process-focused.

CAD-centric companies are the ones who rely heavily on design and engineering work to support their business.  They generate a lot of CAD data, and eventually this CAD data becomes a real pain to manage effectively with manual processes.  Things get lost, data is hard to locate, design reuse is only marginally successful, and the release process has a lower level of confidence.  These companies usually start thinking about PLM because they need to get their CAD data under control.  They usually start PLM with a minimal approach that is just sufficient to tackle the obvious problem of CAD data management.  Sometimes other areas of PLM are discussed, but they are “planned” for a later phase, which inevitably turns into a “much later” phase that still hasn’t happened. What they have done is grease the squeaky wheel while ignoring the corroding frame that is potentially a much bigger problem. CAD-centric companies often benefit from taking a step back to look at their processes; many times they will find that is where the biggest problems lie.

BOMs are often associated with CAD geometry, but many times this isn’t the case.

Companies that don’t deal with a lot of CAD data can often realize the benefits of PLM from a process improvement perspective. Product introductions, project management, BOM management, customer requirements, change management, and quality management are just some areas that PLM can help improve. Many process-focused companies already have systems in place to address these topics, but they are often not optimized, and usually not connected.  They tend to be their own individual silos of work or information, which slows the overall “get to market” process, and reduces the overall effectiveness of the business.  These companies might not have the obvious “squeaky wheel” of CAD to manage, but they have PLM challenges just the same.  The key to improvement with them is to identify the challenges and actually do something about them.

In either case, Tata Technologies has the people and processes to help identify and quantify your company’s biggest challenges through our PLM Analytics process.  This process was developed specifically to address the challenges companies have in identifying and quantifying areas for PLM improvement.  If you’re interested in better identifying areas of improvement for your company’s PLM process, just let us know.  We’re here to help.

 

With all the buzz about Additive Manufacturing, or 3D Printing, in the manufacturing world today, there is a lot of mystery and confusion surrounding the common practices and techniques. So, this week’s blog post will address a common type of 3D printing known as Stereolithography.

But first, What is Additive Manufacturing?

Additive manufacturing is the process of creating a part by laying down a series of successive cross-sections (a 2D “sliced” section of a part). This technology came into the manufacturing world about 35 years ago in the early 1980s, and was adopted more widely later in the decade. A more common term used to describe additive manufacturing is 3D Printing – a term which originally referred to a specific process, but is now used to describe all similar technologies.

Now that we’ve covered the basics of 3D Printing, What is Stereolithography?

Stereolithography is the process of building an object by curing layers of a photopolymer, which is a polymer that changes properties when exposed to light (usually ultraviolet light). Typically this causes the material to solidify, or cure.

This technique uses a bath or vat of material. An ultraviolet laser cures a layer of photopolymer on a platform. The platform is then lowered into the bath, and another layer of material is cured on top of it.

A variation on this technique, referred to as Poly- or Multi-Jet printing, modifies the process slightly. Instead of using a bath of material, jet printing uses separate reservoirs of material, which are jetted onto the build platform and cured with UV light. The material reservoirs in this process are quite similar to inkjet printer cartridges, and the machine functions much like an inkjet printer. This technique was developed by Objet, which was acquired by Stratasys in 2012.

What Are the Advantages of this Process?

Stereolithography is fast. Working prototypes can easily be manufactured within a short period of time. This, however, is greatly dependent on the overall size of the part.

SLA is one of the most common rapid prototyping techniques used today. It has been widely adopted by a large variety of industries, from medical, to automotive, to consumer products.

The SLA process allows multiple materials to be used in one part. This means that a single part can have several different structural characteristics and colors, depending on where each material is deposited. In addition, all of the materials used in SLA are cured through the same process. This allows materials to be blended during manufacturing, which can be used to create custom structural characteristics. It should be noted, however, that this is only available with MultiJet or PolyJet machines.

Of all the technologies available, SLA is considered to be the most accurate. Capable of holding tolerances under 20 microns, accuracy is one of the largest benefits of this technique.

What Are the Disadvantages of this Process?

Historically, due to the specialized nature of the photopolymers used in this process, material costs were very high compared to other prototyping processes – anywhere from $80 to over $200 per pound. The machines themselves are also a considerable expense, ranging anywhere from $10k to well over $100k. Recently, though, renewed interest in the technology has brought more consumer-grade SLA machines to market, which has helped to drive down prices. New material manufacturers have also appeared in recent years (spot-A Materials and MakerJuice Labs), which has cut prices drastically.

Stereolithography is a process that requires the use of a support structure. This means that any part produced with this technique will require a secondary operation post-fabrication.

In Conclusion

There are quite a few different ways to 3D print a part, with unique advantages and disadvantages of each process. This post is the first part of a series, discussing the different techniques. Thanks for reading!
