Tag Archive: PRPC


One of the questions clients often ask me is how new developers can learn Pega quickly. Or, more appropriately, what skills should they be looking for in the people they’d like to move into their Pega practice?

While it’s true that anyone is theoretically capable of learning the product, there are some skill sets which I have seen yield better results, on average, than newcomers without them achieve. This is not meant to take away from allowing business users to use the system and manage rules – this is geared more towards the technical folks who will be doing the design/development of the application.

The skill I consider most beneficial for someone moving into a Pega System Architect role is:

A strong understanding of Object-Oriented Design & Principles (a background developing in an OO language helps)

Pega’s product is built on Java, and produces Java code behind the scenes that is executed at runtime, but this recommendation has less to do with that aspect, and more to do with the overall design of both the OOB rules & class structure and the designs of applications built within Pega. The idea of objects & their relationships is highly evident within Pega applications. Class structures, and the reusability of objects, attributes (properties), and other rules, are carried out via inheritance paths. A good understanding of what an object is, how it relates to other objects, and how it inherits properties & actions from its parents is a HUGE help in learning the product, and learning how to design well within it.
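
For example (a minimal Java sketch, not actual Pega-generated code, and the class and property names are purely illustrative), the way a child class reuses its parent’s properties and behavior mirrors how a Pega class picks up properties and rules through its inheritance path:

```java
// Minimal Java sketch of inheritance; names are illustrative, not real Pega classes.
class WorkItem {                      // analogous to a base work class
    protected String caseId;          // shared "properties"
    protected String status;

    void updateStatus(String newStatus) {   // shared behavior
        this.status = newStatus;
    }
}

class FraudCase extends WorkItem {    // a specialized work type
    private double suspectedAmount;   // adds its own properties

    boolean requiresEscalation() {    // adds its own behavior while reusing
        return suspectedAmount > 10_000;  // caseId, status, and updateStatus()
    }
}
```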

Additional skill sets that I’ve seen be beneficial are:

  • Understanding of Integration types – Web Services, Queuing Mechanisms, File, HTTP, SQL, etc.
  • Understanding of HTML & XML, and to a lesser extent JavaScript & AJAX
  • Understanding of logic: if/then/else and boolean expressions
  • Understanding of Relational Databases & their components
  • Understanding of Enterprise Architecture, WebApp Deployments/Architecture
  • Understanding of the concept of “work” and business process flows (workflow)
  • Business & domain knowledge helps as well, as it often translates into data objects and rules within PRPC

As well as general software development basics such as:

  • Understanding of SDLC and various methodologies – especially agile/iterative ones
  • Understanding good design approaches and conventions
  • Understanding troubleshooting & testing techniques

One exercise I’ve found beneficial when training developers new to Pega/PRPC is to design out an application in their native OO language using such things as UML, Entity-Relationship Diagrams, Use Cases, and Process Flows, and then design out the same application in Pega. While the syntax and the “rules” we use within Pega are a bit different, the general design concepts translate over pretty well. For example, within Java we have classes with attributes, methods, and constructors – and those classes can extend (or be extended by) other classes. In Pega, we also have a class structure, and within each class we have properties, activities, and models. In addition, such things as decision logic (and virtually all structured functionality) are abstracted out into their own rules within Pega for easy reusability by inherited classes & other rules.
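
To make that last point concrete, here is a hedged Java sketch of factoring decision logic into its own reusable unit, roughly analogous to how Pega externalizes decisions into dedicated rules that flows and activities reference rather than hard-code inline. The class and method names here are mine, purely for illustration:

```java
// Illustrative only: abstracting decision logic into its own reusable class,
// roughly analogous to how Pega factors decisions into dedicated rules.
class EscalationDecision {
    // A single place to maintain the decision, reusable by any caller.
    static boolean shouldEscalate(double amount, int priorIncidents) {
        if (amount > 10_000) {
            return true;
        }
        return priorIncidents >= 3;
    }
}

class IntakeProcess {
    void route(double amount, int priorIncidents) {
        // The "flow" delegates to the decision instead of embedding the logic.
        if (EscalationDecision.shouldEscalate(amount, priorIncidents)) {
            System.out.println("Route to senior investigator");
        } else {
            System.out.println("Route to standard queue");
        }
    }
}
```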

While certainly much more goes into learning Pegasystems’ BPMS solution, I hope this is a good overview of some beneficial skills that may help newcomers when first attempting to figure this stuff out!

This is a follow-up to my last blog post, Eight Tips for Long-Term Success with your BPMS, taking a deeper look at one of the tips within.

In it, I wrote:

Tip #8: Implement automated governance to watch code quality. A good automated governance solution will match code against design/development guidelines and prevent it from being checked into the rulebase if it doesn’t meet those guidelines. In addition, an easy-to-use dashboard/portal can host a wide variety of reports to help ensure quality code is being delivered within your tool. Evolve this over time as design/code reviews and multiple iterations begin to show you where there are gaps.

To which David Brakoniecki (@dajb2) commented:

This is a great list of BPM implementation tips but I am intrigued by #8. Can you expand on this point?

By automated governance, it seems like you have rules that do static analysis of the code quality inside the tool. Is this a feature built into Pega or have you written a custom framework to deliver this functionality?

I responded to his comment and answered the Pega-specific question there on that post, but I’d like to take the conversation one step further here.

Just what does the term “Automated Governance” mean?

In this sense, I’m referring to automating, as much as possible, the governance process that ensures the quality of the deliverables within your implementation.

Just what should this governance process entail?

Your governance process should entail all of the following, even if it’s being done as a manual effort, for now:

  • Checks that Enterprise & Area standards are being followed
  • Checks that the BPMS vendor guardrails are being followed
  • Checks that your methodology/process is being followed, including documentation
  • Checks that design/development coding standards are being followed
  • Checks that proper error/exception handling is in place, especially for integrations
  • Checks that proper security & access models are followed and monitored
  • Checks for performance risks
  • Checks for proper code documentation and naming standards
  • Checks for placement of code for best reusability
  • Ability to update, report on, and search the asset library to enable reusability
  • Proper metrics/reporting by user for accountability purposes

If you aren’t doing any or all of these currently, implementing such governance can go a long way toward ensuring the long-term success and quality of the applications being delivered within your BPMS. Once the process is in place, you can start implementing tools and additional software, generally within the BPMS tool itself, to automate reporting and monitoring for these items.

How to Automate?

A good BPMS product will already have some out-of-the-box tools and reports that should help you get started; add to those with your own to complete the picture. The best way to automate your governance is to prevent bad code and ensure guardrail compliance automatically at development time. You’re implementing software within another software tool – enhance it to help prevent non-compliance with defined best practices! For the scenarios you can’t prevent, at a minimum ensure that you can report on them to follow up, and look for trends in your reports that improve over time.
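
To illustrate the idea (a hypothetical sketch in plain Java – this is not Pega’s API, and the class, method, and naming standard are all invented), a development-time guardrail gate could reject a rule save that violates a naming standard and record softer issues as warnings:

```java
import java.util.regex.Pattern;

// Hypothetical sketch of a save-time guardrail check; the method, field names,
// and naming convention are invented for illustration only.
class GuardrailGate {
    private static final Pattern NAME_STANDARD = Pattern.compile("^[A-Z][A-Za-z0-9]+$");

    static void validateOnSave(String ruleName, String appliesToClass) {
        // Reject the save outright for hard violations...
        if (!NAME_STANDARD.matcher(ruleName).matches()) {
            throw new IllegalArgumentException(
                "Rule name '" + ruleName + "' violates the naming standard");
        }
        // ...and record softer issues as warnings for the governance reports.
        if (appliesToClass == null || appliesToClass.trim().isEmpty()) {
            System.out.println("WARNING: rule '" + ruleName + "' has no Applies To class");
        }
    }
}
```

In practice, the same checks that block a save can also feed the warning counts that drive the kinds of governance reports described below.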

For example, within Pegasystems’ PRPC BPM solution, there are several OOB reports I leverage, and I use the tool itself to build the additional things I need.

These include:

  • Enhancing the OOB Preflight report to provide username
  • Creation of a custom Rule-Portal instance and related gadgets for an “Automated Governance” Reporting Dashboard
  • Developer productivity reports
  • Rule Volatility Reports
  • Custom rule errors that are checked when rules are saved during development, to reject changes that break your guardrails
  • Addition of custom rule warnings that are checked when rules are saved; these warnings show up on the Preflight report
  • Reports on which users are creating the most warnings in the last 7 days and last 4 weeks, for trending purposes (see the sketch after this list)
  • Reports on overall warnings over the last 90 days for trending purposes
  • Ability to find warnings by type, severity and aggregate as needed
  • Ability to tie opportunities for improvement back to individual users
  • Ability to approve creation/check-in of certain rule types for tighter control
  • Enhanced reports regarding OOB rules that have been customized by the client
  • Reports to track the same rule being modified by parallel initiatives
  • Custom reports that interrogate the code base for more complex risk patterns

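As a rough sketch of the trending reports mentioned in the list above (the WarningRecord type and its fields are invented for illustration, not Pega’s actual warning data model), aggregating warnings by user over a recent window might look like:

```java
import java.time.LocalDate;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative only: the WarningRecord shape is invented to show how a
// "warnings by user over the last N days" trend report could be derived.
record WarningRecord(String user, String ruleType, int severity, LocalDate created) {}

class WarningTrends {
    static Map<String, Long> warningsByUser(List<WarningRecord> warnings, int lastNDays) {
        LocalDate cutoff = LocalDate.now().minusDays(lastNDays);
        return warnings.stream()
                .filter(w -> !w.created().isBefore(cutoff))   // keep only the recent window
                .collect(Collectors.groupingBy(WarningRecord::user, Collectors.counting()));
    }
}
```
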
I recommend creating a specific dashboard/portal that managers can log in to and run the reports on demand, and we’re currently discussing their needs/desires to have certain key reports automatically generated, attached to an email, and sent to the managers without the need for them to manually log in.

The Key to All of This: Accountability!

You might notice that many of the reports ultimately tie back to individual users/developers. This is key. Nobody likes being singled out, and generally, nobody likes to be the bad guy singling other people out either, BUT without accountability, the quality of your application code and your ability to reuse it properly will be mediocre at best. To achieve excellence, you MUST hold people accountable for their actions (or lack thereof). At the end of the day, we have human beings typing things into a keyboard that ultimately form the code that runs your application – the same code that will continuously be built on top of for years to come as you add features, make improvements, and expand your user base.

Use the report findings as teaching moments to educate the team members who are consistently showing up on the reports. Or, in a multi-team environment, you might notice the issue stems from a single team; perhaps that’s an opportunity to talk with the senior designer/developer on that team who may or may not be making recommendations to other team members, or perhaps there’s a gap in the process somewhere and a need for a better checklist in a design or code review.

Implemented correctly, and with consistent follow-up on report results, this should produce two trends:

  1. Quality & Reusability of code increases
  2. Dings on the Reports decrease

The Problem / Pattern.

I’ve seen a common pattern emerge across several of the client applications I’ve had the pleasure to work with: a desire for greater flexibility within the process than is currently implemented. The issue wasn’t that the application was necessarily designed incorrectly – it may have been delivered exactly as the requirements and the business asked for, completely approved and signed off on with great celebration. Instead, as business unit leaders grew more comfortable with the Pega technology over time, and the end users themselves became more comfortable working within the application, there was a greater desire for a less rigid, more flexible, goal-driven process that puts some of the control back in the hands of the knowledge worker.

I’ve identified 4 major factors leading to this desire:

  1. The original requirements took a very linear approach to the process, simply standardizing every step, decision point, and detail. The need for flexible processes left up to the knowledge workers themselves had not even been considered at that time. It was: do it this way, one piece at a time.
  2. Information can change. Especially in long-running case work, new or updated information may need to be added to the case at any point in time, which may affect the overall end-to-end process.
  3. As the knowledge worker matures, both in application use and business expertise, business leaders have a stronger desire to let them work most efficiently and make smart cognitive decisions on their own when desirable. They are, after all, knowledge workers, not robots.
  4. Mistakes happen. What do you do in a strict linear process when, three steps later, the user realizes they made a mistake earlier in the process and you haven’t designed for those scenarios?

In the case of the original requirements taking a very linear approach to the process, many times the real driving motivator wasn’t that the process itself could literally only be done one way; it was fear of variance in quality and/or a lack of confidence in the end users to do the right thing at the right time, every time.

In the case of changing information, the longer the end-to-end processing time of a single unit of work, the more likely it is that information has changed, new information has been found, or some other external factor has come up that plays a role in what to do with that unit of work next.

As far as the knowledge worker is concerned, initial fears of poor quality can often be mitigated by good training, early exposure to the application, good quality-control reporting, and smart functionality built within the process to perform validations where needed so the user can be corrected if necessary. And when mistakes are made, the user simply needs a way to easily correct their mistake on their own, so that they can continue with their work without the need for IT or support involvement (which can be time-consuming!).

The Process.

While each of these applications had very complex end-to-end processes, and while it’s true that major parts of those processes did need to occur in a specific order, within any one segment of the process many or all of the sub-processes/tasks could ultimately be performed in almost any order and still be considered valid.

One example is a Fraud Investigation application for financial institutions. The key parts of the process might be:

  1. Alert / Awareness of potentially fraudulent activity
  2. Location / Aggregation of similar Alerts and/or current or previous Investigations
  3. Triage / Decide to Continue or Not
  4. Investigation
    • Research Parties (suspects, victims, other)
    • Enrich with Data (transactions, accounts, etc.)
    • Contact Necessary Parties for Additional Information
    • Potential Law Enforcement Involvement
    • Evaluate Risk to Bank Assets
    • Determine patterns of behavior
    • Determine compliance to regulations / bank rules
    • Verify follow-up tasks to be taken
    • and more!
  5. Disposition Investigation (Is this or is this not fraud?)
  6. Multiple Levels of Approvals, Rejections, or Requests for More Info
    • Review / Approve Investigation
    • Review / Approve Disposition
    • Review / Approve Info to be sent to Government
  7. Submission to Governmental agencies
  8. Follow Up Activities
    • Close Accounts
    • Follow up with Law Enforcement
    • Contact Customer via Mail if necessary
    • Financial settlement with customer and/or other banking institutions
    • and more!

The application really had two key units of work: Alerts and their related effort, which mimicked steps 1-5 above, and full-blown Cases and their related effort, which copied steps 4 & 5 and added steps 6-8. Cases implemented Pega’s cover functionality to group one or more alerts, and were where the bulk of the knowledge work took place.

While the end-to-end process does need to occur in a particular fashion, there’s no valid reason some of the sub-tasks couldn’t be done in whatever order the real-time scenario/information at hand calls for. If you control every step of the process in a minute, rigid manner, users are prevented from dealing with exceptions that require flexibility, and as a result they start to second-guess when to move on to the next small task for fear of finding new information for an earlier step at a later point in time. Anytime a user must alter their behavior to use the application, the process is broken.

In this particular example, under the rigid process, business users devised a workaround: complete the entire investigation, then work with management to reject the submission so it would come back to the investigator and restart, at which point new information could be added or information from an earlier part of the process could be corrected. Ideally, that flexibility should be built into the tool, without users needing to work together to form workaround solutions.

The Solution.

There are ways to add this flexibility with the same standard flows used in the rigid, linear approach – however, keeping in mind ease of use, easy access to retrieve/update data, and a pleasing UI, we decided there was a better approach.

The design solution to implement this new flexibility into a still very large, complex process was to leverage Screen Flows within Pega. Screen Flows come with simple back/forward wizard-like capability right OOB – BUT they do have some limitations that must be designed around. There are a few UI patterns that can be used when implementing Screen Flows, my personal recommendation being Tree Navigation. Tree Navigation displays an easy-to-use collapsible tree on the left-hand side of the UI that the end user can click to jump around to the different screens (tasks) of data, which become nodes in the tree. The use of screen flows allowed us to design process flows so that the assignments and sub-flows within them could be turned into nodes within the tree dynamically at runtime.

Now, there is still a need to control the entire end-to-end process, and we wanted to limit the flexible portions to user actions within each major part of the process. This was accomplished by using standard flows to “control” the entire process, end-to-end. These standard flows took care of when to move to the next step in the process, and at what point and where to go if the need arose to return to an earlier point in the process (leveraging tickets), essentially creating the controlling process framework, if you will, for the application. The sub-processes and all UI components were built into specialized screen flows for each part of the process. The appropriate screen flows were then referenced in the appropriate places in the controlling process framework.

At each sub-process, the tree essentially comprised two different types of nodes:

  1. Supporting Data Nodes that could be accessed to view or update data with a single click
  2. Process Nodes that would finish the resolution of the step, exit the sub-process, and hand control back to the process framework

The Supporting Data Nodes were mostly made available to every sub-process along the entire end-to-end path with a few exceptions. Nodes within the tree were grouped, and sometimes nested in a parent-child manner to provide logically grouped sets of data where needed that could be expanded/collapsed at will.

The Process Nodes contained the actual goal task(s) for the overall step in the process, and were upgraded to include both manual checkbox validation that certain sub-tasks were completed by the knowledge worker and system-performed validation. Both the user validation and the system validation were required to pass before control was handed back to the process framework.
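
To illustrate the two node types and the combined validation gate (a hypothetical model in plain Java – none of these classes exist in Pega, and the checks are invented), something like the following captures the idea:

```java
import java.util.List;

// Hypothetical sketch of the two node types described above; the interface,
// class names, and validation checks are invented for illustration.
interface TreeNode {
    String label();
}

// A Supporting Data Node: viewed or updated with a single click, no process impact.
record SupportingDataNode(String label) implements TreeNode {}

// A Process Node: gates completion of the step on both user and system validation.
class ProcessNode implements TreeNode {
    private final String label;
    private final List<Boolean> userChecklist;   // manual checkbox confirmations
    private final boolean systemValidationPassed;

    ProcessNode(String label, List<Boolean> userChecklist, boolean systemValidationPassed) {
        this.label = label;
        this.userChecklist = userChecklist;
        this.systemValidationPassed = systemValidationPassed;
    }

    public String label() { return label; }

    // Only when both validations pass is control handed back to the framework.
    boolean readyToReturnControl() {
        boolean allChecked = userChecklist.stream().allMatch(Boolean::booleanValue);
        return allChecked && systemValidationPassed;
    }
}
```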

The redesign/development effort was split into two phases within logical divisions of the application: the first phase comprised the first 5-6 weeks of the effort, and the second phase comprised the last 9-10 weeks and the bulk of the effort.

Design Considerations.

Screen flows have some limitations that need to be designed around. In addition, further design considerations were made to correctly handle reuse and other best-practice design guidelines, all while staying within Pegasystems’ guardrails.

These were:

  • No SLA support within Screen Flows, though the need for SLAs was critical
  • No Workbasket to Operator Support within Screen Flows
  • Screen Flows have limited flow shapes available for use, and essentially require a fairly linear flow design at development time
  • Showing/Hiding Nodes Conditionally (e.g. certain users are not allowed to see certain data)
  • How to develop common sets of nodes once and reuse them across the specialized sub-processes
  • Dynamic data-driven creation of nodes (e.g. adding a node for each suspect we had information on)
  • Dynamic tree node label generation at runtime (e.g. adding a # on the label to show how many suspects are present – see the sketch after this list)
  • How to direct which node to go to next if you submitted data on one node and immediately wanted to be taken to another, non-sequential node.
  • Ability to have the first default node be a node other than the first node in the tree, upon entry to the screen flow
  • How to handle work already in flight, bringing it into the new process at the correct spot

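As a hedged sketch of the dynamic, data-driven node creation and label generation called out above (illustrative Java only, not Pega’s Tree Navigation implementation; the Suspect type is invented), nodes for each suspect could be generated at runtime with a count on the parent label:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: building tree node labels dynamically from case data,
// with a count appended to the parent label.
record Suspect(String name) {}

class SuspectNodeBuilder {
    static List<String> buildNodeLabels(List<Suspect> suspects) {
        List<String> labels = new ArrayList<>();
        labels.add("Suspects (" + suspects.size() + ")");  // parent label with count
        for (Suspect s : suspects) {
            labels.add("  Suspect: " + s.name());          // one child node per suspect
        }
        return labels;
    }
}
```
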
Each of these considerations was designed around while following Pegasystems’ guardrails, and the result worked beautifully in a reusable, maintainable fashion using OOB functionality.

The Result:

The existing application was completely redesigned & delivered to production with the above solution in the span of 14 weeks with an initial small team of 3 dedicated technical resources and 1 business analyst resource. This included all analysis, design, development, and testing. In order to test so many changes in such a short time, code pushes were made to the testing environment almost daily, and a carefully planned strategy of reusing the same ruleset version was leveraged and managed very closely during the effort.

This design approach gave the knowledge worker much greater flexibility to perform sub-tasks at the optimal time based on real-time data, and eliminated the rigid process flows’ inability to handle unpredictability. In essence, this was goal-driven processing by the knowledge worker rather than strict process management. The end-to-end process was not disturbed; it was enhanced, made more powerful, and made able to handle real-world scenarios that cannot be predicted ahead of time.

In addition to injecting flexibility, greater access to data, and ease of use for end users, a backlog of 30+ change requests was folded into the work effort as well. Specialized tools were created and executed as part of the release effort to capture current in-flight work, perform the necessary cleanup, and inject it into the new processes at exactly the appropriate location.

Business Unit leaders were thrilled with the quick turnaround, the new ease of use, and the addition of the 30+ backlogged CR items that had been accumulating since the initial release. End users and managers no longer needed to step through convoluted workaround procedures to correct an earlier step of the process, and the data was now literally at the knowledge worker’s fingertips – a tree-node click away.

An added bonus of the new design was greater ease of maintenance for future flow-processing modifications, as the risk of a modification resulting in flow errors was diminished because all user assignments were within screen flows.

Critical Success Factors:

Several key factors played into the success of this aggressive effort, which was very large in terms of changes to the application and the sheer number of rules to be modified/created.

  • Business Unit leaders were made available to the Business Analyst and Pega System Architects for quick access during the entire effort
  • Experienced, skilled resources were tasked, left alone, and allowed to do the things they needed to do
  • The loss of a critical resource after the 10th week was quickly backfilled with 4 additional resources
  • Daily collaboration between QC testing team and delivery team
  • Experienced testers who were already familiar with the business functionality of the application
  • Design used OOB mechanisms that stayed within product guardrails

Takeaway:

It’s important to discuss this pattern and the need for goal-driven flexibility within the process upfront with your business clients. Do not let fear pigeonhole your team into delivering only a very rigid, linear process that cannot react appropriately to real-time scenarios. The value of flexible, goal-driven processes will become apparent to your business clients when you start discussing exceptions and how to handle unpredictable data – so do not let yourselves focus only on the big-picture process.

If you’d like to hear more about either the solution details that handled the various design considerations, or how a similar approach could be implemented for your application, please contact me and I’d be happy to have that conversation.
