When you look at some of the leading RPA providers, such as Blue Prism and Automation Anywhere, they have been around since 2002 or 2003. So why did it take until 2014, or even 2015, for the market to catch on to what they offer?
These are the survivors. The ones who persevered. And I am sure there are dozens more who gave up or had to give up before the market really turned.
Even now, RPA users are probably exploiting only 1% of its capabilities across their organisations. They may have deployed it for some F&A processes, but they will not have deployed it wholesale, across every possible process in every function of the whole organisation.
But why did it take so long?
At its simplest, RPA takes processes that are highly rules-based and repetitive, and automates them.
This work has always been done by people. Maybe they were onshore, maybe they were overseas; maybe the work was in house, maybe it was outsourced.
The rules are put into the RPA platform and the platform operates them 24 hours a day, with no errors, no breaks, no issues.
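To make "rules put into the platform" concrete, here is a minimal sketch in Python of what a rules-based processing step looks like. The invoice fields, supplier names, and rules are all hypothetical illustrations; real RPA platforms configure rules like this through their own visual designers rather than raw code.

```python
def process_invoice(invoice: dict) -> str:
    """Apply simple, explicit rules to one invoice record."""
    # Rule 1: the amount must match the purchase order exactly
    if invoice["amount"] != invoice["po_amount"]:
        return "exception"  # hand off for human review
    # Rule 2: only pre-approved suppliers are paid automatically
    if invoice["supplier"] not in {"ACME", "Globex"}:
        return "exception"
    return "approved"

# A bot simply applies these rules to every record, around the clock
print(process_invoice(
    {"amount": 100, "po_amount": 100, "supplier": "ACME"}))  # approved
```

The point is not the code itself but the shape of the work: anything that can be expressed as explicit if-then rules over structured data is a candidate for automation.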
Of course, the real world is more complicated, which is where the platforms get more intelligent.
When the platform identifies an issue, it can:
- Point it out to a supervisor to resolve,
- Recommend solutions to the supervisor, based on what it has learned, or
- Implement a solution to the problem itself
This is where RPA starts moving into the Cognitive or Machine Learning space. Not only is it running processes, it is identifying issues and actively trying to solve them, with little or no human intervention.
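The three escalation tiers above can be sketched as a simple ladder keyed to how confident the platform is in its learned fix. The thresholds and the `Action` names here are purely illustrative assumptions, not taken from any specific product.

```python
from enum import Enum

class Action(Enum):
    FLAG = "point out to supervisor"       # tier 1: surface the issue
    RECOMMEND = "recommend a solution"     # tier 2: suggest, human decides
    RESOLVE = "implement the solution"     # tier 3: act autonomously

def handle_issue(confidence: float) -> Action:
    """Escalate based on confidence in a learned fix (thresholds assumed)."""
    if confidence >= 0.95:
        return Action.RESOLVE
    if confidence >= 0.60:
        return Action.RECOMMEND
    return Action.FLAG

print(handle_issue(0.98))  # Action.RESOLVE
```

The design choice is the interesting part: a platform can start conservatively (everything flagged to a human) and, as its recommendations prove out, be allowed to act on its own for the highest-confidence cases.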
The platforms themselves are built on a range of technologies, and that is certainly not what defines them. Rather, it is the simplicity of their implementation and operation that is key.
RPA platforms sit across existing technologies. They are thin-client, non-intrusive applications that do not need to be integrated with underlying systems or rely on APIs. This alone makes them faster to deploy and easier and cheaper to manage.
You can decide whether to host them in the cloud or on internal networks. That decision normally depends on the company and its existing policies, driven mainly by the sector it is in and the nature of the data going through the processes.
The Business Case
Not only is the technology incredibly powerful, the cost of implementing and operating it is remarkably low.
But a more complete business case is driven by the following factors:
- The bot will cost 20% (or less) of the equivalent human
- The investment to implement and get started is measured in the low thousands of dollars
- Implementation times are measured in weeks, not years
- It can impact almost all functions
- One person can supervise 20 bots, not 10 humans
- The bots work 24 hours a day
- The bots don’t go sick or take holidays
- The bots will break down sometimes, but the rest of the time they are 100% accurate – no human errors
- The bots leave a clear digital audit trail, so every transaction can be reviewed easily and issues addressed faster
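Two of the factors above, bot cost at 20% of a human and one supervisor per 20 bots versus one per 10 humans, can be put into a back-of-envelope comparison. All the salary figures below are illustrative placeholders, not vendor pricing or benchmark data.

```python
# Assumed illustrative costs, per year
human_cost = 40_000                   # fully loaded cost of one FTE
bot_cost = 0.20 * human_cost          # "20% (or less) of a human"
supervisor_cost = 50_000              # one supervisor

# Doing the same workload with 20 bots vs 20 humans:
# one supervisor covers 20 bots, but only 10 humans (so 20 humans need 2)
cost_20_bots = 20 * bot_cost + 1 * supervisor_cost
cost_20_humans = 20 * human_cost + 2 * supervisor_cost

print(cost_20_bots)    # 210000.0
print(cost_20_humans)  # 900000
```

Even before counting 24-hour operation, zero sick days, and error elimination, the staffing arithmetic alone is stark.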
Compliance and Auditing
While much of the focus is on productivity and financial benefits, there is significant value from just getting processes right, 100% of the time.
Humans are prone to errors; machines are not.
If you can get your transactional activities working correctly all of the time, then you can:
- Remove the time you need to fix errors or mistakes
- Increase your customer (or supplier) satisfaction, because everything is “right first time”
- Reduce the workload on staff reviewing transactions to try to find out what went wrong
If it is a regulated process, then the compliance of that process with the regulations is all tracked. When anyone wants to review exactly what happened, the trail is there to see and follow. What's more, they can audit it from anywhere, without disrupting the workflow.
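The digital audit trail described above amounts to an append-only log with one record per processing step. A minimal sketch, with hypothetical process and step names:

```python
import json
from datetime import datetime, timezone

def log_transaction(trail: list, process: str, step: str, outcome: str) -> None:
    """Append one timestamped audit record; records are never edited or deleted."""
    trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "process": process,
        "step": step,
        "outcome": outcome,
    })

trail = []
log_transaction(trail, "invoice-matching", "validate-amount", "pass")
log_transaction(trail, "invoice-matching", "check-supplier", "pass")

# An auditor can replay the trail later, from anywhere, without
# touching the live workflow:
print(json.dumps(trail, indent=2))
```

Because the bot, unlike a human, records every step automatically and identically, the trail is complete by construction rather than by discipline.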