Category "Product Lifecycle Management"

This quote embodies a phenomenon in today’s entertainment media culture: the Alternate Reality Game, or ARG. ARGs aim to bring the audience deeper into the product experience and rev up the hype. The game’s producers build an elaborate web of interconnected “plot points,” for lack of a better term. These plot points are not overt and are never communicated outright; the audience has to dig for them by picking up clues and solving puzzles, each of which leads to the next.

A recent entry into the world of ARGs comes from Blizzard Entertainment’s new game, Overwatch – a game whose players are very interested in information about the next character planned for release. It started with a list of character names on a piece of paper seen in the game – all of them recognizable, with one exception. That exception started the rumors that this could be the next character to be revealed. Next, Blizzard released a couple of videos on YouTube in which a seemingly innocuous blip (below left) actually turned out to be an image; the colors had to be enhanced to reveal a series of numbers (below right). A member of the community converted the hex values to ASCII, XOR’ed them with the number 23, converted the result back to hexadecimal, and mapped it to readable letters…which turned out to be a phrase written in Spanish.

[Images: the innocuous blip from the video (left) and the color-enhanced frame revealing a series of numbers (right)]
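As a rough sketch of that kind of decoding step (the byte values below are made up for illustration, not the actual values from the video), the XOR trick looks something like this in Python:

```python
# Minimal sketch of the community's approach: take a list of byte values,
# XOR each one with 23, and map the results to readable characters.
# The sample bytes below are placeholders, not the real clue data.
def xor_decode(byte_values, key=23):
    return "".join(chr(b ^ key) for b in byte_values)

sample = [0x5F, 0x58, 0x5B, 0x56]   # hypothetical clue bytes
print(xor_decode(sample))           # -> "HOLA"
```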

Another puzzle was solved when a similar anomaly appeared in another video: a series of images that looked like the color-bar test patterns you used to see on a TV station late at night after it stopped broadcasting. One of the images was a series of horizontal black and white lines. One player turned those lines 90 degrees and read them with a barcode reader. The result was, of course, more hex values, which were converted into binary. Through some applied computer science – turning the binary into pixels, with 1s black and 0s white – a QR code was ultimately revealed. What did the code say? “Was that easy? Now that I have your attention, let’s make things more difficult.” As of this writing, the ARG is still alive and well, with more pieces being revealed regularly.
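For a feel of that last step (again purely illustrative; this bit pattern is invented, and a real QR code is a much larger grid), rendering a binary string as black and white pixels takes only a few lines:

```python
# Render a binary string as "pixels": 1s become black blocks, 0s become spaces.
# The bit pattern and grid width here are made up for illustration only.
bits = "111110001101010100011111"   # hypothetical 24-bit pattern
width = 6                           # grid width in pixels

for start in range(0, len(bits), width):
    row = bits[start:start + width]
    print("".join("█" if bit == "1" else " " for bit in row))
```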

Where Does PLM Come In?

09 0C 0B 0E 0B 17 04 12 19 0B 11 06 19 03 12 17 18 03 02 19 18 18 15 04 05

So why is this story in a PLM Insights blog? Well, I’ve seen many companies treat their design and engineering data like an ARG – meaning that lots of people are interested in it, it’s all over the place, and only a few (smart and creative) people really know how to find it. Whether it’s the “smart” part naming scheme from the late ’80s that needs some kind of secret decoder ring, or the folder naming conventions where the names serve as obscure metadata for the files they contain, the effect is the same.

An example part file (the names and numbers have been changed to protect the innocent):

S:\22648 COMP-LOGISTICS TAILCROSSER BG-F813\Units Designed By ABC\22648-9399 OP 30 Backup ASM Fixture reworked (same as 22648-9899) See XYZ Folder for new Op80 design\OP80 reworked 4-16-08 same as 22648-9899\Libraries\OP30\Purchase\Stndrd\fd65645_2

Here we have used just about every bit of the Windows character limit (which is 260, for those interested: 3 for the volume designation, 256 for the path and file name with extension, and a null terminating character). Anyone who can manage files this way is clearly a talented individual. Even more impressive, they were part of a 23-person design team that did all of their projects this way. I can’t imagine the frustration of new hires trying to find anything in that kind of environment.
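For the curious, here is a throwaway check of that (obfuscated) path against the 260-character limit; the exact count will vary with the real names, of course:

```python
# Quick check of the example path against the classic Windows MAX_PATH limit:
# 3 characters for the volume designation, up to 256 for the path and file
# name, and 1 for the terminating null = 260 total.
MAX_PATH = 260

path = (
    r"S:\22648 COMP-LOGISTICS TAILCROSSER BG-F813\Units Designed By ABC"
    r"\22648-9399 OP 30 Backup ASM Fixture reworked (same as 22648-9899)"
    r" See XYZ Folder for new Op80 design\OP80 reworked 4-16-08 same as 22648-9899"
    r"\Libraries\OP30\Purchase\Stndrd\fd65645_2"
)

print(f"{len(path)} characters used, {MAX_PATH - 1 - len(path)} to spare")
```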

The benefits of searching for data are pretty clear (see Google). Yet to this day, companies are still using antiquated methods because they think it’s “cheaper” than implementing a PLM system. Our PLM Analytics benchmark and impact analyses have proven otherwise, and that doesn’t include the myriad other benefits a PLM system offers. Let us know if you’re done playing the engineering and design ARG!

FYI, there is no ARG in this post…or is there?

Among technology practitioners, there is no shortage of pundits offering predictions about the future and where the next big wave is going to hit. The reason is that the stakes are high – a correct forecast of future technology trends can literally be worth billions.

So what are the pundits talking about? Here is a sampling of the current buzz:

  1. Big Data
  2. Social Media
  3. Crowd Sourcing
  4. Social Computing
  5. Mobile Connectivity

So how does this impact PLM? Traditionally, PLM is conducted on internal infrastructure, in secured environments, using traditional devices. For example, a typical designer creating 3D CAD data works on a company workstation behind a firewall. Similarly, an engineer creating BOM data uses a secured client installed on his company laptop. The engineer may take his laptop home and interact with the PLM system via a VPN, but that is probably the extent of “mobility.”

Returning to the technology buzz, take two of these trends – mobile connectivity and social computing – and consider the following scenarios:

  1. Your newly recruited engineer has transitioned his digital life to his tablet and no longer uses a laptop. (hence the title of this piece)
  2. The VP of Engineering wants to query the status of his product introduction using his mobile phone.
  3. Your company wants immediate access to customer feedback on existing products so that this can be translated into requirements for new or updated designs.

Given the traditional model sketched out earlier, implementing anything close to these scenarios is almost impossible. The infrastructure, mindset, and processes will not support mobile connectivity from alternative devices, nor will they allow general access to a requirements-gathering front end. It also raises a whole lot of questions around data security, use of private devices, and non-company access. While the technology to achieve these scenarios probably exists, it would require a considerable investment of money and effort to make it happen.

This leads to the fundamental risk/investment equation: it may be possible to construct a business case that justifies the outlay. At a high level, two possibilities exist:

  1. Traditional PLM infrastructure is good enough for at least the next ten years and can be re-evaluated then.
  2. Changing the way business is conducted is a do or die activity and this includes PLM.

An informal survey of small to medium-sized companies shows that most participants have not even considered these technology trends – partly because there appears to be no business imperative, and partly because there are other, more attractive avenues for immediate investment.

So, do you want your engineers to be doing all their engineering work on a tablet anywhere in the world?

You have a PLM system. Fundamental to this system are the concepts of a version and a revision. These are probably the most misunderstood concepts in the PLM realm: the terms mean different things to different people and are often used interchangeably and inconsistently.

For the purposes of the rest of this piece, we will use the following definitions:

Version – represents a small, incremental change to the design that is saved in the database. Versions are not necessarily retained permanently once a revision has been created.

Revision – represents a significant event in the design process and is saved permanently in the database for reference throughout the design process.

Diagrammatically, the difference is illustrated below:

[Diagram: versions vs. revisions]
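To make the distinction concrete, here is an illustrative sketch of the two definitions in code; it is not the data model of any particular PLM system:

```python
# Illustrative model only: versions are incremental, disposable saves; revisions
# are permanent milestones that the wider organization is allowed to consume.
from dataclasses import dataclass, field

@dataclass
class DesignObject:
    name: str
    versions: list = field(default_factory=list)    # work-in-progress saves since the last revision
    revisions: list = field(default_factory=list)   # permanent, shareable milestones

    def save_version(self, data):
        self.versions.append(data)                  # small incremental change

    def create_revision(self, label):
        if not self.versions:
            raise ValueError("nothing new to promote")
        self.revisions.append((label, self.versions[-1]))   # promote the latest version
        self.versions.clear()                       # intermediate versions need not be kept

    def latest_revision(self):
        # What other participants should work from: the "best so far"
        return self.revisions[-1] if self.revisions else None
```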

This subject is often confusing to talk about because the terms are used interchangeably, and the distinction between a version and a revision is not clearly understood – to the extent that some participants think they are the same thing. Because of this, it is important that any organization with a PLM system ensures that all participants clearly understand the definitions and the difference between them.

In a collaborative PLM environment, participants depend heavily on viewing or using data generated by other participants. For example, a headlamp engineer needs the position of the locating holes in the sheetmetal to be able to design his locating pins (if this is the order of precedence). In this scenario, the headlamp engineer will say, “I need the latest sheetmetal to begin my design.” This statement is common in design and engineering teams, but it is inherently imprecise because it raises the question: do you need the latest version or the latest revision?

Based on the definitions given earlier, what is really required is the latest revision. A version is work in progress and could be incomplete, because the responsible author may be in the middle of a redesign or a new concept. For this reason, versions should not be visible to the larger organization; only revisions should be accessible, as they satisfy the definition of “best so far.” This concept is very difficult to get across to a lot of people, and it is the conundrum referred to in the title. It takes some courage to work on data that will change sometime in the future, but this is absolutely required in an efficient design process.

The version/revision conundrum also leads to some interesting human psychology. Consider any collaborative design environment where multiple participants have to submit data into a PLM system to progress a large project. It is important in these environments that all participants follow the mantra of “publish early, publish often” or, per the nomenclature of this piece, create many revisions. This is based on the principle that incomplete or slightly inaccurate data is better than no data at all.

However, process managers often put systems in place that highlight inaccuracies or incomplete data, effectively punishing early publishers. So data authors hold back and only create revisions late in the process, when they are certain of their accuracy. This is counterproductive.

So pay attention to the version/revision conundrum; clear definitions and policies around this simple issue can greatly improve a PLM process.

 

Fundamental to any PLM system are the ideas of access control and data security. Only authorized personnel can access a PLM system to view or manipulate its contents. This is controlled via a login procedure that includes a user password. Personnel are added to the list of authorized users by the PLM administrator after their specific access rights have been approved.

Once users have been granted access, it must then be determined what operations they can carry out in the PLM system. The simplest (and default) security model, in which any user can carry out any operation, is very undesirable and could lead to actions that destroy or leak vital data.

This scenario requires the development of a security model that determines which users can carry out which operations. Security models are normally based on two concepts:

  1. Roles
  2. Organizations

A role in the database defines what a user assigned that role is allowed to do. Typical roles are as follows:

  1. Viewer – this role would be allowed to view data but not make any alterations or modifications
  2. Team Member – this role would be allowed to alter and update a limited subset of the data along with being able to carry out certain operations (e.g. initiate a workflow)
  3. Team Leader – this role would be able to do everything that a Team Member could do along with the ability to operate on a larger subset of data and carry out more operations (e.g. progress a workflow, change ownership)
  4. Approver – this role would be able to approve certain operations on the data (e.g. approve a release of information)
  5. Database Admin – normally limited to a handful of technically qualified people

Once the roles in a database have been defined, the organizations are put in place. These normally mirror the actual organizational structure, although this is not a necessity. Organizations in a PLM system usually work on specific projects or programs. Once the organizations are defined, users are allocated to them and assigned specific roles.

The final result can be represented in a table as follows:

User         Role           Within Organization         Outside Organization
                            View   Modify   Approve     View   Modify
John Doe     Team Leader    Y      Y        N           Y      N
Paul Revere  Team Member    Y      Y        N           N      N
David Earp   Approver       Y      N        Y           Y      N
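As a sketch of how such a matrix might be evaluated (the organization names and helper function are hypothetical, purely to illustrate the concept):

```python
# Hypothetical evaluation of the access matrix above: what a user may do
# depends on their role and on whether the data belongs to their own
# organization. Not the security model of any specific PLM system.
PERMISSIONS = {
    # role:        (inside own organization,   outside own organization)
    "Team Leader": ({"view", "modify"},        {"view"}),
    "Team Member": ({"view", "modify"},        set()),
    "Approver":    ({"view", "approve"},       {"view"}),
}

def can(role, user_org, data_org, operation):
    inside, outside = PERMISSIONS[role]
    allowed = inside if user_org == data_org else outside
    return operation in allowed

print(can("Team Member", "Chassis", "Chassis", "modify"))    # True
print(can("Team Member", "Chassis", "Powertrain", "view"))   # False
```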

So how is security set up in your PLM system? Are all of its security capabilities being used to ensure that no intellectual property is destroyed or leaked?

Back in the day…

There it was, one of the first internet communities, Usenet, about to undergo a sea-change unlike any it had seen before. It was 1993, September, a month that would never end.

It started much like the years before: an influx of new people coming into the universities, getting online for the first time. The community absorbed them in much the same manner as it had in the past. These first-timers were indoctrinated with the well-established etiquette and protocols required to thrive in this brave new world.

It seems archaic now, but back then, in the “before times”, there was no way for mass discussion; social media had not yet been born.

The plot twist

And then it happened. AOL, then a name synonymous with the internet, decided to grant access to Usenet for all of its customers. Picture the mobs that gather outside department stores the morning after Thanksgiving: the unlocking of the door let loose a mass of people that overwhelmed the community. There were just not enough graceful souls able to help coach these new users in “civilized” net behavior. Social norms were thrashed; standards went out the window. It was the equivalent of the wild, wild west. In a word, it was chaos.

Looking ahead

Now think of how you on-board new designers or engineers. You show them who’s helpful and who to avoid. You show them around, pointing out places of interest, teaching them company standards, design methodologies, workflow processes, and so on. Over the coming decade (2014 through 2024, to be exact), according to stats provided by the Bureau of Labor Statistics (BLS), the Architecture and Engineering field will grow by an average of 3.4%, or about 710,000 jobs.

The biggest (projected) job gainers:

  • Civil – 106,700
  • Mechanical – 102,500
  • Industrial – 72,800
  • Electrical – 41,100

Couple this with the BLS projection of labor force participation over the same period, which shows a 1:1.3 ratio of people leaving the work force to people entering it. That will be a lot of churn, meaning a lot of people to on-board. The products will be ever more complicated, and so will the enabling technology. Technology is cited as one of the reasons the field isn’t growing as fast as other areas: the productivity gains from PLM are making companies more efficient, even as complexity grows.

Conclusion

Companies will need a strategy for managing changes in their employee base as well as the evolution of the technology. We offer a series of benchmarking and analysis services called PLM Analytics, including one aimed specifically at this issue called PLM Support. Let us know if we can help solve your Eternal September.

Any organization managing a product introduction must have an underlying project plan that determines how it is going to happen. Any design process goes through various stages, from initial concept to final product. A generic process is illustrated below:

[Diagram: generic project deliverables process]

Now, this may be simple at the concept level, but it can become incredibly complex at the execution level. Consider a 500-person team working on a project, with all of their activities needing to be coordinated. Even more crucial to overall success is the management of deliverables – has every participant delivered his or her contribution to the project on time and to the required quality?

This is where an integrated PLM and project management system can be a powerful tool. In such a system, engineering and design deliverables are attached to tasks in the project plan, and the associated task can only be considered complete once the deliverable is in place. If project management is executed in a standalone system, there is no link between task and deliverable, and no way of knowing for certain whether what is reflected in the project plan corresponds with reality.
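As a simplified sketch of that link (not any vendor’s actual data model), completion can be derived directly from the presence of the deliverable:

```python
# Simplified sketch: a task is only complete when its deliverable exists in the
# system, and the completion timestamp is recorded automatically for everyone
# to see. Names and structure are illustrative, not from any specific tool.
from datetime import datetime

class Task:
    def __init__(self, name):
        self.name = name
        self.deliverable = None
        self.completed_at = None

    def attach_deliverable(self, deliverable):
        self.deliverable = deliverable
        self.completed_at = datetime.now()      # no hiding the completion date

    @property
    def is_complete(self):
        # Status is derived from the deliverable, not from a manual update
        return self.deliverable is not None

task = Task("Release headlamp bracket drawing")
print(task.is_complete)                          # False until the deliverable arrives
task.attach_deliverable("Drawing 12345 Rev B")
print(task.is_complete, task.completed_at)       # True, with an auditable timestamp
```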

So as a project manager, which would you prefer?

  1. A project plan that is disconnected from the required deliverables and may or may not reflect reality.
  2. An integrated system where the project plan is tied to engineering and design deliverables.

There are several consequences of such an integrated system:

  1. Project managers can immediately see whether deliverables have been fulfilled; there is no need to verbally or formally query a project participant.
  2. Milestone reviews can be conducted efficiently; either the milestone deliverable is in the system or it is not.
  3. Project managers are presented with real-time status reports and dashboards; as soon as a deliverable is attached to a task, the report is updated.
  4. There is no hiding the dates for completing a task. The timestamp of when a deliverable is completed is visible for all to see.

All of this allows for what the PLM world calls “automatic” or “invisible” project governance. Projects are self-governing, with all participants being aware of status in real time.

Wouldn’t you want this kind of system?

© Tata Technologies 2009-2015. All rights reserved.