One of the challenges of building a digital, customer-centric, omni-channel experience on top of product-centric back-office systems and processes is the exponentially increasing number of permutations and data flows it creates.
In traditional systems, we have one application form per product, over and above the complex workflow systems built to deal with the many permutations that may occur.
The main reason companies struggle to transition to a digital, customer-focused approach is that the complexity of these systems increases exponentially.
Let’s start with the number of customer personas you have to cater for. Multiply that by the number of channels you’d like to support. Next, factor in the running of live experiments to achieve validated learning, and multiply that answer by three for AML risk-model permutations. Multiply again by the number of products you would like to support, then by two for known and new customers. And so, the complexity keeps increasing exponentially.
When we consider the need for cross- and up-selling for all these permutations, we see that the level of complexity quickly scales beyond what we can model with traditional workflow systems.
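To make the multiplication concrete, here is a minimal Java sketch of how the permutations compound. The counts are illustrative assumptions, not figures from the text:

```java
// Illustrative only: plug in your own counts to see how quickly
// origination permutations grow as factors multiply together.
public class PermutationCount {

    public static long permutations(int personas, int channels, int experiments,
                                    int amlModels, int products, int customerStates) {
        // Each dimension multiplies the total number of journey permutations.
        return (long) personas * channels * experiments * amlModels * products * customerStates;
    }

    public static void main(String[] args) {
        // e.g. 4 personas, 3 channels, 2 concurrent experiments,
        // 3 AML risk models, 5 products, known vs. new customers
        System.out.println(permutations(4, 3, 2, 3, 5, 2)); // prints 720
    }
}
```

Even with these modest counts, a traditional one-workflow-per-permutation approach would need hundreds of distinct flows.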
The common “small experiment” mistake
When starting the digital and/or customer-centric journey, many companies make the mistake of declaring that they will start with a single product and a single channel for learning, and then progress from there. While the learning and small-experiment ideas are often valuable, in the context of origination they immediately create lock-in to yet another product-centric solution. This approach thus tends to create multi-channelled products.
An alternative approach is to start with a single customer persona – one with a very simple need. This has the same effect of reducing scope and providing the opportunity for learning, but it also spotlights the customer from day one.
This method will also expose the defects and missing elements in a customer-focused journey. This distinction is incredibly important, yet not initially obvious: the first product launched requires a thoroughly considered, end-to-end journey. Will a second product provide a full journey as well? Will we ask the customer to enter their ID number, first name and surname again? If not, what will we do when a new customer wants to buy the second product first?
By starting with the initial customer need, it becomes easy to progress to the next need. First, we will be receiving data from customers that clarifies what is important to them, so the prioritisation of work becomes far easier. A second benefit is that once the customer is onboarded, it is easier to offer them multiple products and options. A third benefit is a reduction in effort to add more products. Fourth, the improved customer experience will drive up conversion and increase customer satisfaction.
In this complex environment, a modern customer-onboarding and product-origination system faces many challenges in addition to feature and compliance pressures: on the customer-facing front, on the compliance front, on the back-office processing front, and on the product-administration front.
Due to the proliferation of channel options, we need to ensure that business logic does not leak into the channel. However, any channel needs to remain free to drive the customer journey that is appropriate to their chosen channel medium. For example, a self-directed smartphone journey will be very different to a mediated journey or a self-directed, USSD feature-phone journey. This means that no process can be dictated by back-office business systems. We need a layer of abstraction:
The Channel owns the customer experience and is free to change the user flow and sequence of interactions, as well as run concurrent experiments on these.
The Business Systems can expect the data in the order and format they normally receive it, and should not be impacted by changes to any channel.
The Origination System needs to enforce the validation and verification rules and absorb these changes into the customer journey. This layer acts as a temporary data reservoir for receiving data in any order – and from multiple sources – and feeding it to downstream services as and when a given dataset is complete.
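As a sketch of the data-reservoir idea above – using invented names, not the SprintHive API – a layer that accepts fields in any order, from any source, and releases a dataset only once it is complete might look like this:

```java
import java.util.*;

// Hypothetical sketch of the "temporary data reservoir": fields arrive in any
// order from any channel or service; a downstream dataset is released only
// once all of its required fields are present, in the order the downstream
// system expects them.
public class DataReservoir {
    private final Map<String, String> fields = new HashMap<>();

    // Any channel or service contributes fields as they become available.
    public void accept(String field, String value) {
        fields.put(field, value);
    }

    // A downstream system declares the fields it needs, in its own order;
    // an empty Optional means the dataset is not yet complete.
    public Optional<List<String>> datasetFor(List<String> requiredFields) {
        if (!fields.keySet().containsAll(requiredFields)) return Optional.empty();
        List<String> payload = new ArrayList<>();
        for (String f : requiredFields) payload.add(fields.get(f));
        return Optional.of(payload);
    }
}
```

The channel is free to collect the surname before the ID number; the back office still receives its dataset in the order and shape it expects.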
With an unknown information delivery sequence, the origination system needs to trigger calls to legacy systems and supporting services as and when data becomes available. For example, when we have an ID number and a verified identity, we can trigger a lookup in the business’ customer database or service. When facial recognition passes, or an OTP is correctly supplied, we can mark the customer’s identity as verified.
This technique allows for multiple sources to participate in the data collection for a given customer application. The customer can use one channel while being assisted by an intermediary on another channel, all while automation services might be contributing to the state of the application. Any trigger can fire based on the state of the application and this evaluation runs on every mutation of the application regardless of source.
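A minimal sketch of this trigger mechanism – assuming a simple key-value application state and hypothetical trigger names, not the actual SprintHive implementation – could look like:

```java
import java.util.*;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Illustrative sketch of state-based triggers: every mutation, from any
// source, re-evaluates all registered triggers against the full application
// state (a real system would also track which triggers have already fired).
public class TriggerEngine {
    private final Map<String, Object> state = new HashMap<>();
    private final List<Trigger> triggers = new ArrayList<>();

    record Trigger(String name,
                   Predicate<Map<String, Object>> condition,
                   Consumer<Map<String, Object>> action) {}

    public void register(String name, Predicate<Map<String, Object>> condition,
                         Consumer<Map<String, Object>> action) {
        triggers.add(new Trigger(name, condition, action));
    }

    // Called for every mutation, regardless of which channel or service made it.
    public void mutate(String key, Object value) {
        state.put(key, value);
        for (Trigger t : triggers) {
            if (t.condition().test(state)) t.action().accept(state);
        }
    }
}
```

For example, a customer-database lookup trigger would be registered with the condition "ID number present and identity verified", and would fire as soon as the second of those facts lands, whichever source supplied it.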
The origination system needs to be able to deliver the payload to each of its peer systems in those systems’ respective formats. This implies that the origination system needs to be able to transform the application data based on the context. The payload may also need to be different per channel in order to optimise for channel-specific constraints.
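For illustration – assuming a simple map-based application and two invented peer formats, not real integration contracts – context-based transformation might be sketched as:

```java
import java.util.Map;
import java.util.function.Function;

// Sketch of context-based payload transformation: the same application data
// is rendered differently for each peer system. Both formats are invented
// for illustration.
public class PayloadTransformers {

    // e.g. a legacy core-banking system that expects a delimited record
    public static final Function<Map<String, String>, String> LEGACY_CSV =
        app -> String.join(",", app.get("idNumber"), app.get("surname"), app.get("firstName"));

    // e.g. a modern downstream service that expects compact JSON
    public static final Function<Map<String, String>, String> JSON =
        app -> String.format("{\"id\":\"%s\",\"name\":\"%s %s\"}",
                app.get("idNumber"), app.get("firstName"), app.get("surname"));
}
```

Each peer system gets its own transformer, so adding a new peer (or a channel-specific payload) is a matter of adding a function, not changing the application data itself.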
Since data can be provided from multiple asynchronous sources, the origination system needs to keep the application data in all of the participating channels and systems in sync. This allows for an omni-channel customer experience, but also allows for long-running 3rd-party lookups and data-verification processes.
The SprintHive system uses a publish-and-subscribe mechanism that allows interested parties to read the current state of the application and to subscribe to future changes. The system will then keep the application state up-to-date until the party unsubscribes from its participation in the application process.
Keeping the channels in sync on a highly granular level prevents data collisions between channels, as well as between channels and external data sources.
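A toy version of this publish-and-subscribe idea – illustrative only, as the real mechanism is not described in this detail here – might look like:

```java
import java.util.*;
import java.util.function.Consumer;

// Minimal publish-and-subscribe sketch: interested parties read the current
// application state on subscription and receive every subsequent change
// until they unsubscribe.
public class ApplicationTopic {
    private final Map<String, Object> state = new HashMap<>();
    private final Map<String, Consumer<Map<String, Object>>> subscribers = new HashMap<>();

    // Returns the current state so a late joiner can catch up immediately.
    public Map<String, Object> subscribe(String party, Consumer<Map<String, Object>> onChange) {
        subscribers.put(party, onChange);
        return Map.copyOf(state);
    }

    public void unsubscribe(String party) {
        subscribers.remove(party);
    }

    // Every mutation is pushed to all current subscribers as a snapshot.
    public void publish(String key, Object value) {
        state.put(key, value);
        Map<String, Object> snapshot = Map.copyOf(state);
        subscribers.values().forEach(s -> s.accept(snapshot));
    }
}
```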
Reusing data and processing results is a good way to reduce customer effort and back-office inefficiencies. For example, why KYC the customer for every application? Why pay for a bureau call when you made one for the same applicant just yesterday?
The SprintHive system keeps a record of every interaction and data mutation in an event stream, and reduces this vast amount of data to a specific state per product. This enables the re-use of data that was initially captured or collected during an application process. Data captured in any application will be re-used in all future applications, removing the need to recapture the same information multiple times. The shared event stream means that this is accomplished without re-integration effort for every new product. Signing up for a second product, then, would only require supplying the missing data.
This reduces customer effort and friction, further minimising back-office effort while speeding up the application process.
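The reduction of an event stream to per-product state can be sketched as a simple fold. The field names and the missing-fields helper below are illustrative assumptions, not the SprintHive reducer itself:

```java
import java.util.*;

// Sketch of reducing an event stream to current application state: each
// event records a data mutation; folding the stream in order yields the
// state, so data captured in one application is available to the next.
public class EventReducer {
    public record Event(String field, String value) {}

    public static Map<String, String> reduce(List<Event> stream) {
        Map<String, String> state = new LinkedHashMap<>();
        for (Event e : stream) state.put(e.field(), e.value()); // last write wins
        return state;
    }

    // A second product only needs the fields the stream doesn't already hold.
    public static Set<String> missingFields(List<Event> stream, Set<String> required) {
        Set<String> missing = new LinkedHashSet<>(required);
        missing.removeAll(reduce(stream).keySet());
        return missing;
    }
}
```

In this picture, a second product declares its required fields, and the customer is only asked for whatever `missingFields` returns.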
We understand that enterprises are each at different stages of their service evolution, and that they will want to reuse the services they have already invested in. The SprintHive system is built from highly modular microservices that are easy to swap out. Our deployment fabric uses a software overlay network that makes it easy to route network traffic to alternative services. We also use internal- and external-facing API gateways to control API access and routing rules. All this means that it is very easy to add enterprise capability to the system, or to swap out system services for enterprise services.
The use of Reducers on the Event Stream makes it easy to integrate new services into existing application processes, or to change data structures and formats.
The SprintHive objective is to always integrate rather than replace or duplicate. We have a set of pre-built integrations to common external services like bureaus, rules engines, government APIs and more.
There are many regulatory frameworks that govern the customer-onboarding space – GDPR, FICA, RICA, RMCP, FATCA, AML and POPI, to name but a few. There are also corporate-governance requirements on the onboarding and origination system. Finally, there are product rules that need to be met before a product can be issued.
The system needs to hold and guarantee a complete audit trail on every action taken against the application, and to have the ability to report on this in the prescribed format. No mutation to the application can be allowed to bypass the audit functionality.
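One way to guarantee that no mutation bypasses the audit – sketched here with invented names, not the SprintHive implementation – is to make the audited method the only mutation path:

```java
import java.time.Instant;
import java.util.*;

// Sketch of the "no mutation bypasses the audit" rule: the application state
// is private, and the only way to change it is through a method that appends
// an audit entry first.
public class AuditedApplication {
    public record AuditEntry(Instant at, String actor, String field, String value) {}

    private final Map<String, String> state = new HashMap<>();
    private final List<AuditEntry> auditTrail = new ArrayList<>();

    public void mutate(String actor, String field, String value) {
        auditTrail.add(new AuditEntry(Instant.now(), actor, field, value));
        state.put(field, value);
    }

    // Immutable view of the trail, from which regulatory reports can be built.
    public List<AuditEntry> auditTrail() {
        return List.copyOf(auditTrail);
    }
}
```

Because the trail is append-only and written before the state change, every action taken against the application is accounted for, whichever channel or service performed it.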
At SprintHive, a concerted effort is made to automate manual tasks for as many applications as possible. This reduces lead time and should drive up conversion rates. The reduced back-office effort will also result in operational cost savings. Furthermore, automation decreases the capture-error rate and reduces internal fraud.
| Technology | Purpose | License |
| --- | --- | --- |
| Kubernetes | Container orchestration | Apache 2.0 |
| Java | Programming language | GNU GPL |
| Kotlin | Programming language | Apache 2.0 |
| Python | Programming language, ML & DS | PSFL |
| Gradle | Build automation & dependency management | Apache 2.0 |
| Elasticsearch | Indexing & business metrics | Apache 2.0 |
| Git | Version control | GNU GPL v2.0 |
| MongoDB | Document storage | GNU AGPL v3.0 |
| Minio | Cloud storage abstraction | Apache 2.0 |
| Spring Boot | Application framework and IoC container | Apache 2.0 |
The system is built on a microservice architecture and is deployed into a Kubernetes container orchestration platform. Kubernetes is an open-source system for automating deployment, scaling, and management of containerised applications.
The system makes use of a number of Kubernetes features.
The SprintHive platform was built with security as a first-class concern and thus employs state-of-the-art security in the form of a platform-enforced, secure service mesh with a multi-tiered approach to access control. Further details are available on request.