

Any BPMS solution worth its salt should provide efficiencies over time in the form of reusable assets in your code base.

There are several ways to do this by design:

  • Parameterize your code/rules as much as possible
  • Define a proper object model / inheritance paths
  • Properly name, comment, and document your code/rules
  • Place your rules within the appropriate class to be reused
  • Develop rules in small and distinct, but meaningful pieces
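To make the parameterization point concrete, here is a minimal, hypothetical sketch in plain Python (not Pega rule syntax); the claim rule and its threshold are invented purely for illustration:

```python
# Hypothetical illustration (not BPMS-specific): a hardcoded rule vs. a
# parameterized one that several applications can reuse.

def is_claim_overdue_hardcoded(days_open):
    # Hardcoded 30-day threshold: every line of business that needs a
    # different threshold has to copy and edit this rule.
    return days_open > 30

def is_claim_overdue(days_open, threshold_days=30):
    # Parameterized threshold: one rule, reused with different settings
    # by the auto, home, and life claims applications.
    return days_open > threshold_days

print(is_claim_overdue(45))                     # True  (default threshold)
print(is_claim_overdue(45, threshold_days=60))  # False (line-of-business override)
```

The second version is the one that becomes a reusable asset: each application supplies its own threshold instead of forking the rule.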

By doing all of the above, the code you design and build should become more and more reusable over time, and the applications you build today can serve as frameworks for the applications you build tomorrow and beyond.  Alternatively, you could of course run out and purchase a set of pre-built rules and code that serves the general purpose you’re looking for and configure/customize it to meet your needs. These are, in essence, the definition of a Framework.  I would argue that the BPMS tool itself is a very generalized framework used to build out BPM or BRE type applications, but what I’m discussing here are the frameworks of code/rules that sit on top of your basic BPMS install.

What is an example of these kinds of frameworks?

Consider the world of Insurance, and within that, the world of Claims. A single large insurance company may sell insurance policies for life, home, and auto, and naturally all insurance policies come with the ability to file a claim against your policy to extract value. While each type of policy will have some specifics regarding the types of claim, we can find a lot of similarities between the three. It’s these similarities we could use to build out a Claims Framework that the individual lines of business could leverage and customize as needed. In our example these would include object models and related integrations, decisions, and business rules around customer information such as name, address, phone numbers, birthdate; basic constructs of policies such as policy number and date of issue; any servicing agent information; billing information; and basic constructs for filing a claim, retrieving policy information, etc…

Some vendors like Pegasystems sell add-on frameworks to do such things as Customer Management in the call center, Fraud Case Management, Retail Banking, Insurance, Healthcare, etc… These are also great framework starting points, but do have some downfalls discussed later in this post.

All businesses have basic and core concepts that can be reused across applications, and all of these types of data and rules should be built out within a framework that all of your other applications will sit on top of.

What then, are some benefits of utilizing frameworks?

The benefit of using frameworks, and really of any type of rule reuse, is that each rule only has to be maintained in one location; should the need arise, modifying it in that one place automatically lets all the applications built on top of it pick up the change without additional code changes. This ultimately reduces development and testing time, improves speed to market, and ensures consistent code is shared where it should be and easily maintained over time.

So, that’s some great news about using frameworks, but what about the bad and ugly stuff, the stuff nobody wants to talk about? Well, I’m going to tell you, with a bit of a disclaimer: I don’t have any inherent issues with frameworks themselves; it’s poor decisions about when and how to use them that make up this next section of the blog.

So, here’s what to avoid when it comes to frameworks:

  • Don’t overdo it.

Too many frameworks aren’t practical; you’ll end up reusing the same few sets for everything anyway. Create your enterprise framework, perhaps frameworks for your internal divisions (if your company is large and diversified enough), and then application-level frameworks such as the claims example above.  If you buy an external framework, that’s great, but when you start building frameworks for other frameworks, you’ve probably gone a bit overboard.

  • Don’t purchase a framework you’re not really going to use.

Just because you like 20% of what a framework does, don’t purchase it only to throw out or completely re-customize the other 80%. It will be cheaper and less of a headache for you long term to just build your own framework on top of the base BPMS install.

  • Don’t ignore proper guardrails when building out your framework.

If you customize the-ever-living-snot out of rules by hardcoding lots of stuff (custom Java, HTML, JavaScript, etc…), you’re not going to be happy when the base BPMS tool comes due for an upgrade and you find that, because you customized so much, things are now broken or you can’t take advantage of cool new OOB features. Be vigilant about following proper design and development guidelines and guardrails within your framework (you should always do this, but even more so within a framework that will have additional applications built on top of and dependent upon this code!)

  • Not all frameworks are created equal.

Say you’re looking for a claims framework and you’ve decided to purchase one from the software vendor or an outside third party: don’t assume that all other companies do things exactly like your company. On a high level, one would think that most claims applications are pretty straightforward and will be somewhat alike. That’s true; however, what tends to be VERY different between companies is how they keep and structure their data and object relationships. This is the kind of thing you should be hoping to benefit from within your framework, along with generic processes you can tailor, but you need to at least do due diligence to see whether the framework is really going to work for you. Read the previous bullet point again if you’re unsure what I mean here!

One last note about frameworks: take your time designing your framework, because you’ll potentially be building multiple applications on top of it, and those applications will go through multiple versions. Take the time to get it right! For some additional tips for success within your BPMS implementations, please see my earlier blog post here.

This is a follow-up post to my last blog post, Eight Tips for Long-Term Success with your BPMS, taking a deeper look at one of the tips within.

In it, I wrote:

Tip #8: Implement automated governance to watch code quality. A good automated governance solution will match code against design/development guidelines and prevent it from being checked into the rulebase if it doesn’t meet those guidelines. In addition, creation of reports and an easy-to-use dashboard/portal can host a wide variety of reports to help ensure quality code is being delivered within your tool. Evolve this over time as design/code reviews, and multiple iterations begin to show you where there are gaps.

To which David Brakoniecki (@dajb2) commented:

This is a great list of BPM implementation tips but I am intrigued by #8. Can you expand on this point?

By automated governance, it seems like you have rules that do static analysis of the code quality inside the tool. Is this a feature built into Pega, or have you written a custom framework to deliver this functionality?

I responded to his comment, and answered the Pega specific question there on that post, but I’d like to take the conversation one step further here.

Just what does the term “Automated Governance” mean?

In this sense, I’m referring to automating, as much as possible, the governance process that ensures the quality of the deliverables within your implementation.

Just what should this governance process entail?

Your governance process should entail all of the following, even if it’s being done as a manual effort, for now:

  • Checks that Enterprise & Area standards are being followed
  • Checks that the BPMS vendor guardrails are being followed
  • Checks that your methodology/process is being followed, including documentation
  • Checks that design/development coding standards are being followed
  • Checks that proper error/exception handling is in place, especially for integrations
  • Checks that proper security & access models are followed and monitored
  • Checks for performance risks
  • Checks for proper code documentation and naming standards
  • Checks for placement of code for best reusability
  • Ability to update/report/search an asset library to enable reusability
  • Proper metrics/reporting by user for accountability purposes

If you aren’t doing any of these currently, implementing such governance can go a long way toward ensuring the long-term success and quality of the applications being delivered within your BPMS. Once the process is in place, you can start implementing tools and additional software, generally within the BPMS tool itself, to automate reporting and monitoring for these items.

How to Automate?

A good BPMS product will already have some out-of-the-box tools and reports to help you get started; add your own to complete the picture. The best way to automate your governance is to prevent bad code and ensure guardrail compliance automatically at development time. You’re implementing software within another software tool, so enhance it to help prevent non-compliance with defined best practices! For the scenarios you can’t prevent, at a minimum ensure that you can report on them to follow up, and look for trends on your reports that improve over time.
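As a sketch of what “preventing bad code at development time” can look like, here is a hypothetical save-time guardrail check in Python. The rule fields, guardrails, and error messages are invented for illustration; a real BPMS would expose its own validation hooks:

```python
# Hypothetical sketch of a save-time guardrail check that rejects a rule
# check-in outright instead of merely warning. All field names and
# guardrails below are invented for illustration.

def check_guardrails(rule):
    """Return a list of guardrail violations; an empty list means the rule may be saved."""
    violations = []
    if not rule.get("description"):
        violations.append("Rule is missing documentation.")
    if "hardcoded_sql" in rule.get("source", ""):
        violations.append("Hardcoded SQL detected; use a defined data access rule.")
    if not rule.get("applies_to_class"):
        violations.append("Rule is not placed in a class; reusability will suffer.")
    return violations

def save_rule(rule, rulebase):
    """Reject the check-in when guardrails are broken; otherwise save the rule."""
    violations = check_guardrails(rule)
    if violations:
        raise ValueError("; ".join(violations))
    rulebase.append(rule)
```

The important design choice is the hard rejection: a warning a developer can dismiss becomes a report item later, while a rejected save never enters the rulebase at all.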

For example, within Pegasystems’ PRPC BPM solution, there are several OOB reports I leverage, and I use the tool itself to build the additional things I need.

These include:

  • Enhancing the OOB Preflight report to provide username
  • Creation of a custom Rule-Portal instance and related gadgets for an “Automated Governance” reporting dashboard
  • Developer productivity reports
  • Rule volatility reports
  • Custom rule errors that are checked when rules are saved during development, rejecting changes that break your guardrails
  • Custom rule warnings that are checked when rules are saved; these warnings show up on the Preflight report
  • Reports on which users are creating the most warnings in the last 7 days and last 4 weeks, for trending purposes
  • Reports on overall warnings over the last 90 days, for trending purposes
  • Ability to find warnings by type and severity, aggregated as needed
  • Ability to tie opportunities for improvement back to individual users
  • Ability to approve creation/check-in of certain rule types for tighter control
  • Enhanced reports on OOB rules that have been customized by the client
  • Reports to track the same rule being modified by parallel initiatives
  • Custom reports that interrogate the code base for more complex risk patterns

I recommend creating a specific dashboard/portal that managers can log in to and run the reports on demand. We’re currently discussing their needs/desires to have certain key reports automatically generated, attached to an email, and sent to the managers without the need to manually log in.
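The “warnings by user” trend reports above could be sketched roughly like this (a hypothetical Python illustration; the warning records and field names are invented, and a real implementation would query the BPMS’s own rule-warning tables):

```python
# Hypothetical sketch of a "warnings by user" trend report over a
# rolling window. The warning records below are invented sample data.
from collections import Counter
from datetime import date, timedelta

def warnings_by_user(warnings, days, today):
    """Count warnings per user created within the last `days` days."""
    cutoff = today - timedelta(days=days)
    return Counter(w["user"] for w in warnings if w["created"] >= cutoff)

warnings = [
    {"user": "alice", "created": date(2012, 5, 1)},
    {"user": "alice", "created": date(2012, 5, 3)},
    {"user": "bob",   "created": date(2012, 4, 1)},  # outside the 7-day window
]
print(warnings_by_user(warnings, days=7, today=date(2012, 5, 4)))
# Counter({'alice': 2})
```

Running the same aggregation over 7-day, 4-week, and 90-day windows gives the short-term accountability view and the long-term trend view side by side.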

The Key to All of This: Accountability!

You might notice many of the reports ultimately tie back to individual users/developers. This is key. Nobody likes being singled out, and generally nobody likes to be the bad guy singling other people out either, BUT without accountability, the quality of your application code and your ability to reuse it properly will be mediocre at best. For proper excellence, you MUST hold people accountable for their actions (or lack thereof). At the end of the day, we have human beings typing things into a keyboard that ultimately form the code that runs your application. The same code that will continuously be built on top of for years to come as you add features, make improvements, and expand your user base.

Use the report findings as teaching moments to educate the team members who consistently show up on the reports. In a multi-team environment, you might notice the issues stem from a single team; perhaps that’s an opportunity to talk with the senior designer/developer on that team who may or may not be making recommendations to other team members, or perhaps there’s a gap in the process somewhere and a need for a better checklist in a design or code review.

Implemented correctly, with consistent follow-up on report results, this should produce two trends:

  1. Quality & Reusability of code increases
  2. Dings on the Reports decrease