Posts Tagged "PLM"

Firstly, credit where credit is due: this article was inspired by fellow PLM soldier Rob Ferrone, who includes the title “Digital Plumber” in his LinkedIn profile.

How does the plumbing analogy fit into PLM? Here is a generalized definition of plumbing from Wikipedia:

“Plumbing is any system that conveys fluids for a wide range of applications.” (Interestingly, the word derives from the Latin plumbum, or lead – lead pipes were the first used for water delivery.)

Of course, the term has since been extended to cover any system that facilitates the flow of fluids, information, air, etc. For example, people talk about “a problem with the plumbing of my heart” when referring to arterial blockages, or say “our network plumbing is inefficient,” meaning there are bottlenecks in the passage of data across the network.

So, how does this extended definition apply to PLM, and why is it important to pay attention to “PLM Plumbing”? Let me try my own definition:

PLM Plumbing – The art and science of optimizing background systems to ensure that information, processes, and workflows in a PLM system flow as efficiently as possible for a user.

It is important to split this definition into two parts because we are really talking about two different participants in a PLM system. Let’s examine each side below.

The User

What does a user want out of a PLM system? Here is a partial list of features or behaviors that are likely to result in great User adoption:

  • Intuitive user interface
  • Minimal button clicks
  • Fresh, modern look
  • Easy access to help screens, tooltips and other aides
  • Presentation of the right information
  • Easy “drill down capabilities” for looking up details
  • Dashboards and reports
  • Responsive screens (no “slow load”)
  • Use on any device
  • Work anywhere anytime
  • Accessible and intelligent search capability (users compare to Google)
  • Simple and easy to understand workflows
  • Simple process to create objects with uncluttered forms

The list above is obviously incomplete and high level. Also, many of the requirements are functions of the underlying software and its design, and cannot be changed easily, certainly not within a specific implementation.

The PLM Plumber

So, what are PLM Plumbers? They are the heroes of a PLM implementation, constantly working in the background to provide Users with everything they require. They take care of the “plumbing”: all the background setup, infrastructure and configuration that a User does not want to deal with, but that allows data to flow efficiently.

Here are a few areas where an expert PLM Plumber can have an impact:

Infrastructure Design

A crucial part of any PLM implementation is ensuring that a robust infrastructure supports the system. Not designing this correctly can have implications for user performance and hence acceptance. Another complete blog could be written about this subject, but suffice it to say the PLM Plumber is an expert in these matters – this is the essence of plumbing. (Just a word of warning: don’t create something analogous to the title picture of this article.)

Device Availability

It is a fact of modern life that everybody expects to be able to conduct business anywhere, anytime. More and more business interactions are migrating to smart devices – why not PLM? The system can be configured and enabled to allow for this. In terms of the plumbing, this may entail some behind-the-scenes IT systems (VPN, mobile web pages, device security, etc.). Include it in your implementations!

Workflows and processes

Committees often design convoluted, complicated workflows that are then implemented in a PLM system. The result is that Users do not understand them, are frustrated when they don’t work, and make multiple mistakes. It is the duty of the PLM Plumber to simplify and streamline workflows and processes – Users will appreciate this. And reduce the number of clicks it takes to approve a workflow while you are at it.

User Entry Forms

Another irritation to Users is the large forms that need to be completed when creating an object in PLM: multiple fields asking for lots of information. The average user completes only the mandatory fields (those with *) and hits create. So why have the non-mandatory fields in the first place? It is the duty of the PLM Plumber to fight against unnecessary fields.

Organization and Security

This is an important part of any PLM implementation, and one where the PLM Plumber can have a large impact, because most PLM systems must be configured from scratch when it comes to the security model. Nothing is more annoying to a user than a message such as “the user is not authorized for this function.” Not only is it a blow to their ego, it is also one of the events guaranteed to result in non-adoption. Of course, there are good reasons to have security in place; one must prevent unauthorized actions in the system. This is where the PLM Plumber comes in, designing a security policy that balances corporate requirements with User convenience. The actual design is transparent to the User; they just want to get their job done.

Reports and Dashboards

All Users want certain information from the system that will allow them to effectively do their job. Each User may require a different set of information, presented in a certain manner. Here is where the PLM Plumber can become very useful by configuring the correct reports and dashboards for Users. In general, Users don’t want to be bothered with this task – they want to see the results.

The Art

Note that my original definition included the word “art.” This may seem an overreach when discussing a technology such as PLM, but it can be defended. Faced with balancing complexity and simplicity in a workflow, for example, the PLM Plumber applies their art and produces an elegant result. Consider also reports and dashboards – is the design of a sleek dashboard not an art?

Conclusion

Here’s to plumbing in PLM. Just as plumbers create complex, invisible background systems that deliver fluids, so the PLM Plumber delivers information to Users in an efficient manner. Embrace your destiny!

Visit www.tatatechnologies.com to learn more about our PLM offerings and how we can help customers use the best technology for their needs.

For many companies, PLM starts as a CAD/design data vault and later evolves into a design data exchange platform. The most successful companies are taking PLM beyond a design data exchange and access control platform to a knowledge-driven decision support system. This means PLM not only needs to manage the multitude of information generated at various stages of the product lifecycle, but must also capture product development knowledge and feed it back into the lifecycle. For example, the requirements and design for a newer version of a product also need to be driven by the knowledge elements captured from the previous version’s lifecycle, from inception to design to manufacturing and service.

When PLM stays within the design engineering world, it is constrained in its ability to exchange information and capture knowledge from downstream stages managed by disconnected, siloed systems. The result is engineers spending a huge amount of time on data acquisition tasks. Industry studies show that information workers spend 30-40% of their time solely on information gathering and analysis, wasting time searching for nonexistent information, failing to find existing information, validating information, or recreating information that can’t be found.

Quality escapes are another challenge with such disconnected systems, occurring when the product does not conform to the engineering definition. Non-conformances found on the shop floor are costly to review and disposition, and even more severe when the product is already in service. Reconciling change is also extremely challenging, especially its downstream propagation, resulting in significant productivity losses. Slow change processing, along with quality escapes, causes delays in new product introduction, affecting companies’ overall ability to compete.

The first step towards transforming PLM into a true knowledge-driven decision support system is to extend it to the CAD/CAM/CNC process chain, taking it to the shop floor. Such a solution helps establish a continuous loop from engineering into the shop floor, covering operations management and manufacturing execution systems (MES). A continuous-loop system provides more ways to capture business intelligence and then suggest solutions based on previous patterns. It then becomes much easier to capture information and use analytics to synthesize valuable knowledge elements, compared to the fragmented solutions many companies have today. It is also a foundational element for establishing a Digital Twin per the Industry 4.0 vision.


Other key benefits of extending PLM to manufacturing include:

Reducing the time to market

  • Enhanced collaboration between Product and Manufacturing Engineering
  • Enhanced Traceability and Faster Change Management

Enhancing Flexibility

  • Manufacturing plans comprehend product variability/complexity
  • “What if” scenarios for optimized decision making

Increasing Quality

  • Manufacturing Simulation and validation integrated in PLM
  • Up-to-date 3D work instructions delivered to the shop floor

Increasing Efficiency

  • Ongoing process optimization based on Closed loop feedback of utilization data
  • Reuse of common methods/tooling

Robots, automation, and what some would even call “artificial intelligence” are everywhere around us today. From factories to automobiles, from our smartphones to our refrigerators, digital life has had a profound impact over recent decades. This doesn’t appear to be slowing down anytime soon, either.

Automation and robots in factories have replaced many assembly line jobs, and this has also led to great improvements in quality and efficiency. Instead of assembly line workers, we now have robot technicians and programmers.

The big question is “Why?” And the answer is generally along the lines of improved efficiency, driven by the need to stay competitive.

It could be argued that we are seeing the same thing in the engineering environment with design automation and the associated engineering intelligence built into repeatable design types… or automated workflows that are electronically managed to ensure process repeatability, quality and efficiency.

This begs the question: can companies survive in today’s market without investing in their business and engineering processes the way they have invested, or must invest, in manufacturing?

This is a further follow-up to my previous articles on digital twins, focusing on the Feedback Loop pillar.

Smart Factory loop

The feedback loop starts with the Smart Factory. This is a fully digitalized factory model of a production system, connected via sensors, SCADA systems, PLCs or other automation devices to the main product lifecycle management (PLM) data repository. In the Smart Factory, all events on the physical shop floor during production are recorded and pushed back directly to the PLM system or through the cloud. Artificial intelligence (AI) technology is used to study and analyze this information, and the main findings are sent back to product development, manufacturing planning, or facility planning.

Why is this important? Production facilities and manufacturing processes tend to change immediately after the start of production. New ideas will be implemented, new working methods will be deployed and new suppliers might be selected, all requiring changes to the production system or process. Since these modifications will certainly impact the future, updating them in the system at this stage becomes a must. Production systems outlive the product lifecycle, and many companies use their production systems to make multiple products. These factors contribute to the increasing need to regularly capture these changes in the PLM system, which can later be used to distribute this information to all parties. The information collected during production can also serve as the basis for improving the maintainability of manufacturing resources. With this information, we can enable much better (sensor-based) condition-based maintenance, and thus increase uptime and productivity.
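
To make the loop concrete, here is a minimal sketch, in Python, of how shop-floor events pushed back from automation devices might be turned into findings for manufacturing or facility planning. It is not tied to any specific PLM, SCADA, or cloud product; the event fields, tolerance, and station names are assumptions for illustration only.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List

# Hypothetical shop-floor event as it might arrive from a SCADA/PLC gateway.
@dataclass
class MachineEvent:
    station_id: str
    cycle_time_s: float      # actual cycle time recorded on the floor
    planned_cycle_s: float   # cycle time assumed in the manufacturing plan

def findings_for_planning(events: List[MachineEvent], tolerance: float = 0.10) -> List[str]:
    """Flag stations whose real cycle times drift beyond the planned values.

    In a real deployment these findings would be written back to the PLM
    system (or routed through a cloud service) so planners can update the
    production model; here we simply return readable messages.
    """
    findings = []
    for station in sorted({e.station_id for e in events}):
        actual = mean(e.cycle_time_s for e in events if e.station_id == station)
        planned = mean(e.planned_cycle_s for e in events if e.station_id == station)
        drift = (actual - planned) / planned
        if abs(drift) > tolerance:
            findings.append(
                f"{station}: actual cycle time {actual:.1f}s deviates "
                f"{drift:+.0%} from plan ({planned:.1f}s); review production model."
            )
    return findings

if __name__ == "__main__":
    sample = [
        MachineEvent("WELD-01", 62.0, 55.0),
        MachineEvent("WELD-01", 64.5, 55.0),
        MachineEvent("PAINT-02", 41.0, 40.0),
    ]
    for finding in findings_for_planning(sample):
        print(finding)
```

In practice the findings would be routed into the PLM system’s planning or change objects rather than printed, but the shape of the loop is the same: record on the floor, analyze, feed back.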

Smart product loop

Almost every product made today is a smart product. Many companies are looking for ways to improve the connection with their smart products while they are being used by their customers. Monitoring product use can provide a lot of knowledge for improving products. More than that, connecting to these smart products can generate a new type of business model that may result in more competitive offerings.

PLM Challenge

These feedback loops, and the data they generate, are a challenge for PLM too. In the short term, the PLM issue for digital twins is how IoT-gathered data can best be put to work: extrapolated, parsed, and redirected. To where? At whose direction? The quick and easy solutions are analytics running on the cloud, machine-to-machine (M2M) communication, and analyses based on artificial intelligence (AI). Such questions are expected as digital twins emerge as the next revolution in both data management and lifecycle management.

Ultimately, the use of PLM will allow us to bring digital twins into close correspondence – in sync – with their physical equivalents in the real world. When this comes to pass, we can expect problems to be uncovered more quickly and products to be better supported. Products with digital twins will be more reliable, with less downtime, while operating more efficiently and at lower cost. PLM-powered digital twins will boost user and owner confidence in their physical products. Ultimately, digital twins reflect what users and owners expect to receive when they sign a contract or purchase order.

Standing on the beach, overlooking the bountiful, yet imperfect, harvest, he pondered the situation in front of him. “Why are all of my troop mates eating these sand-covered sweet potatoes? In the beginning, they were delicious… and without the sand. Now? These wonderful treats are all but inedible. What if I…

This is the beginning of a tale based on a scientific research project, though it may have evolved into something of an urban legend. The idea is that scientists in Japan, circa 1952, were studying the behavior of an island full of macaque monkeys. At first, the scientists gave the monkeys sweet potatoes. After a period of time, they started covering the sweet potatoes in sand to observe how the monkeys would react. Not surprisingly, the monkeys still ate the treats, however begrudgingly. Then, the story goes, a young monkey took his sweet potato to the water and washed it off. He discovered that it tasted as good as it had before the sand. Excitedly, the young monkey showed this discovery to his mother. Approvingly, his mother began washing hers in the water as well.

Still, the vast majority went on crunching away on their gritty meals. Over time, a few more monkeys caught on. It wasn’t until a magic number of monkeys were doing this – we’ll say the 100th – that seemingly the entire troop began rinsing their sweet potatoes off in the water.

Call it what you will – social validation, the tipping point, the 100th monkey effect – it all comes down to the idea that we may not try something new, however potentially beneficial, until it’s “OK” to do so. Cloud solutions for PLM could be coming to that point. These products have been in the market for a few years now, and they mature with every update (and with no upgrade headaches, either).

IDG Enterprise reports in its “2016 IDG Enterprise Cloud Computing Survey” that “Within the next three years, organizations have the largest plans to move data storage/data management (43%) and business/data analytics (43%) to the cloud.” Another survey, RightScale’s “2017 State of the Cloud Survey,” found that overall challenges to adopting cloud services have declined. One of the most important concerns, security, has fallen from 29% of respondents reporting it as a challenge to 25%. Security is still a valid concern, though I think the market is starting to trust the cloud more and more.

With our experience and expertise with PLM solutions in the cloud, Tata Technologies can help you choose if, when, and how a cloud solution could be right for your company. Let us know how we can help.

My last post outlined how to derive more value from PLM data through reports. The complexity of data in the engineering environment is skyrocketing, and Teamcenter as a PLM system provides advanced reporting capabilities for enterprise data, including data managed in external systems like MRP and ERP.

The Teamcenter Report Builder application provides basic reporting capabilities for data managed inside Teamcenter. It supports two kinds of reports:

  1. Summary Reports
  • Reports that summarize similar information – for example, reports that show all employees, all the items belonging to a user, or the release status of items
  • Context-independent reports – no object selection required
  • Generated from Teamcenter saved queries
  2. Item Reports
  • Reports that can be run on a particular object – for example, BOM or workflow information for a given object
  • Executed in the context of one or more objects

Behind the scenes, Report Builder performs a data dump based on Teamcenter queries and supports output to common formats like Excel, XML, text, and HTML. It is easy to build these simple reports on top of Teamcenter queries, and they can be run from both the rich client and the Active Workspace client. Excel can be further leveraged for complex processing, charting, and aggregation of the output.
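
As a small illustration of that last point, the sketch below post-processes a Report Builder export outside the tool. It assumes the report output has been saved or converted to CSV with hypothetical “Item ID” and “Release Status” columns (the actual columns depend on the saved query and report definition) and simply counts items per release status, the kind of aggregation you might otherwise do in Excel.

```python
import csv
from collections import Counter

def release_status_summary(csv_path: str) -> Counter:
    """Count items per release status in a report export.

    Assumes a CSV export with a hypothetical "Release Status" column;
    adjust the column name to match the actual report definition.
    """
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            status = (row.get("Release Status") or "").strip() or "No status"
            counts[status] += 1
    return counts

if __name__ == "__main__":
    # "item_report.csv" is a placeholder filename for the exported report.
    for status, count in release_status_summary("item_report.csv").most_common():
        print(f"{status}: {count}")
```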

The Teamcenter Reporting & Analytics module provides additional, more advanced reporting capabilities. It can summarize information and present data from many sources in a single report using an easy-to-build, configurable, drag-and-drop layout.

It can leverage standard formatting tools like headers/footers, dates, page numbers, report names, filters, tables, charts, and other elements. Reports can be run from either Active Workspace or the Teamcenter Reporting & Analytics client. Its business intelligence is designed for Teamcenter and understands the relationships and associations of PLM information. It comes with over 100 out-of-the-box reports in areas like Change Management, BOM Reports, Substance Compliance, Workflow, Administrator Reports, Verification Management, PMM, Schedule Manager, and Requirements Manager. It supports powerful and fast BOM reporting, project planning and status reporting, dashboards, and process and change reporting.

It has direct access to Teamcenter data through APIs and offers connectors to standard enterprise applications. It can also enforce data security based on the Teamcenter access model. Additional capabilities include:

  • Customized Analytics
    • Organization-specific process status metrics and KPIs
    • Multi-level root-cause analysis
    • Mean time between failure / to failure (MTBF, MTTF) analysis
    • Historical Performance Analysis
  • Reporting Control
    • Save Snapshots of pre-defined reports
    • Group/Role based Access to report data
    • User Controlled Conditional Formatting
  • Resource Management
    • Automated Report Scheduler and Delivery
    • Submit Analysis to queue for load management
    • Caching Techniques to reuse data cubes

Teamcenter Reporting & Analytics benefits include:

  • Analytics, Dashboards and Traditional Reporting – understand your data to improve your products and processes
  • Time to Value – start with pre-configured reports and enable custom reports for your business in a couple of weeks, not months or years
  • Designed for Teamcenter – enable your entire enterprise to easily understand the information they require to make better decisions
  • Self Service Analytics – enable data discovery through self service analytics designed for Teamcenter and optimized to your needs

There they were, sailing along their merry way. Toward the horizon, a narrow strait approaches. As the boat gets closer, they notice a couple of strange features: on one side a cliff, on the other a whirlpool. Upon arrival, it becomes apparent that this is the cliff where the monster Scylla dwells. On the other side lurks the monster Charybdis, spewing out huge amounts of water and causing deadly whirlpools. Each monster is close enough that to avoid one means meeting the other. Determined to get through, our intrepid hero Ulysses must make a decision. The idiom “between Scylla and Charybdis” comes from this story. In more modern terms, we would translate this to “the lesser of two evils.”

PLM administrators, engineering managers, and IT teams are often given this same choice, with equally deadly – well, unfortunate – outcomes. What is the dilemma? Customize the PLM system (beyond mere configuration) to match company policies and processes, or change the culture to bend to the limitations posed by “out of the box” configurations.

Companies will often say something to the effect of “We need the system to do X.” To which many vendors meekly reply, “Well, it can’t exactly do X, but it’s close.” So what is a decision-maker to do? Trust that their organization can adapt, risking lost productivity and possibly mutiny? Or respond by asking, “What will it take to get it to do X?” and take on the risk of additional cost and implementation time?

We can further elaborate on the risks of each. When initially developing the customizations, there is the risk of what I call “vision mismatch”: the company describes X to the best of its ability, but the full understanding of the bigger picture gets lost when the developer writes up the specification. This leads to multiple revisions of the code and frustration on both sides of the table. Customizations also carry the longer-term risk of “locking” you into a specific version. While you gain the benefit of keeping your processes perfectly intact, the system is stuck in time unless the customizations are upgraded in parallel. Some companies avoid that by never upgrading… until their hardware, operating systems, or underlying software become unsupported and obsolete. Then the whole thing can come to a crashing halt. Hope the backups work!

However, not customizing has its own risks. What if the new PLM system is replacing an older “homegrown” system that automated some processes that the new system cannot? (A “homegrown” system comes with its own set of risks: the original coder leaves the company, the code was never commented, there are no specifications, etc.) For example, in the old system, raising an issue automatically created an engineering change request and started a CAPA process. The company has now gained a manual process, exposing it to human error. Or perhaps the company has a policy requiring change orders to go through a “four-eyes” approval process, which the new system has no mechanism to support.

Customizing is akin to Charybdis, whom Ulysses avoided, deciding it was better to knowingly lose a few crew members than to risk losing the entire ship to the whirlpool. Not customizing is more like Scylla: the losses are smaller, but the probability of taking them is much higher, to the point of near certainty.

We’ve been through these straits and lived. We have made the passage with many companies, from large multinationals to the proverbial “ma and pa” shops. Let us help you navigate the dangers with our PLM Analytics benchmark.

Enterprise-wide PLM data systems hold huge amounts of business data that can potentially be used to drive business decisions and effect process changes that generate added value for the organization. Many PLM users are unaware that such valuable data even exists, while for others, advanced data search and retrieval can feel like looking for a needle in a haystack because of their unfamiliarity with the PLM data model. Hence it is important to process the data into meaningful information and model it into actionable engineering knowledge that can drive business decisions for everyday users. Reporting plays a key role in summarizing that large amount of data into a simple, usable format that is easy to understand.

Reporting starts with capturing the right data – the most important step and, many a time, the least stressed one. When data is not captured in the right format, it results in inconsistent or non-standard data.

Let’s take a simple workflow example: workflow rejection comments are valuable information for companies trying to understand the recurring reasons for rejection and to improve first-time yield (FTY) by developing training plans that address them. Users might not enter rejection comments unless they are made mandatory, so it is important to have data-model checks and balances that capture the right data and standardize it through categorization and LOVs (lists of values).
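
As a minimal sketch of such a check, the Python snippet below validates a rejection against a hypothetical LOV of standardized reasons and insists on a comment. The category names and rules are illustrative assumptions, not part of any particular PLM product’s API.

```python
# Hypothetical LOV of standardized rejection categories.
REJECTION_REASONS = {
    "Incomplete data",
    "Incorrect classification",
    "Drawing/model mismatch",
    "Missing approvals",
    "Other",
}

def validate_rejection(reason: str, comment: str) -> list:
    """Return a list of validation errors for a workflow rejection.

    Enforcing a categorized reason plus a free-text comment keeps the data
    consistent enough to report on first-time yield later.
    """
    errors = []
    if reason not in REJECTION_REASONS:
        errors.append(f"Reason must be one of the LOV values: {sorted(REJECTION_REASONS)}")
    if not comment or not comment.strip():
        errors.append("A rejection comment is mandatory.")
    elif reason == "Other" and len(comment.strip()) < 20:
        errors.append("'Other' rejections need a detailed comment (20+ characters).")
    return errors

if __name__ == "__main__":
    print(validate_rejection("Incomplete data", "BOM quantity missing on line 40."))
    print(validate_rejection("Other", "bad"))
```

With the reasons standardized this way, first-time-yield reporting becomes a simple count per category rather than a text-mining exercise.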

The next step is to filter and present the right information to the right people. End Users typically want to run pre-designed reports and perhaps slice and dice the data to understand it better. Business intelligence designers and business analysts who understand the PLM schema and its business relationships are the ones who design the report templates. Report design is sometimes perceived as an IT or software function, and as a result enough business participation is not ensured, which can reduce the effectiveness of the reports for end users. It is important to have business participation from report identification through report design to report usage. Business process knowledge is the key here, not PLM tool expertise alone.

Since business processes get improved and modified based on the market and performance trends derived from PLM reports, it is important to have continuous improvement initiatives that fine-tune reporting against these improved processes and new baselines, from data capture to presentation. That makes it a continuous cycle: business processes need to be designed to support reporting, and reports need to help improve the process.

Properly designed reports provide increased visibility into shifting enterprise-wide status, reduce the time and cost of data analysis, ensure quicker response times and faster product launch cycles, and improve product quality and integrity.

How do your reports measure up? Do you have any questions or thoughts? Leave a comment here or contact us if you’re feeling shy.

Sometimes CAD can be used to start establishing PLM practices. Since PLM systems rely on data to be effective, ensuring consistent, correctly entered information is paramount. Capabilities like classification with properties and metadata can rely heavily on CAD to be used effectively. For example, let’s consider the classification and data for a machined part. If the part is going to require machining, we could assign it a classification of “Machined.” Since the part is going to be machined, we would want to ensure that “Stock Size” is one piece of metadata that gets tracked. Most CAD systems have a way to ensure this “Stock Size” is at least filled out, and some can even be automated to calculate the stock size without any user intervention. Of course, repeatable logic needs to be put in place, but once that is done, the time spent on stock size calculations is saved and the potential for errors is eliminated.


Case in point: Utilize iLogic in Autodesk Inventor to calculate stock size for machined parts. Once this is done, users can forget about manually checking all the measurements; all they need to do is flag the part as “Machined” and the system does the rest!
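
Actual iLogic rules are written in Inventor’s VB.NET-based rule editor against the Inventor API; the Python sketch below only illustrates the kind of repeatable logic such a rule might encode: take the part’s bounding-box dimensions, add a machining allowance per side, and round up to the nearest stock increment. The allowance and increment values are assumptions for illustration.

```python
import math

def stock_size(length: float, width: float, height: float,
               allowance: float = 3.0, increment: float = 5.0) -> tuple:
    """Derive a rectangular stock size from a part's bounding box (mm).

    Adds a per-side machining allowance to each dimension and rounds up to
    the nearest stock increment, mimicking what an iLogic rule could write
    into a "Stock Size" property when a part is classified as "Machined".
    """
    def round_up(value: float) -> float:
        return math.ceil(value / increment) * increment

    return tuple(round_up(d + 2 * allowance) for d in (length, width, height))

if __name__ == "__main__":
    # Hypothetical bounding box of a machined bracket, in millimetres.
    print(stock_size(122.4, 48.0, 17.5))   # -> (130.0, 55.0, 25.0)
```

In an actual rule, the computed values would be written into the part’s properties (for example a “Stock Size” iProperty) so they flow into the PLM system automatically on check-in.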

