FRIDAY, MARCH 24, 2017

Making Use of DFLSS to Cultivate a Shared Compute Farm

Digital check processing. Fraud detection. Compliance monitoring. Trading floor analytics. These are all integral functions for many modern financial service institutions. And all have one thing in common – they require a huge amount of computing power.

One approach to providing fast responses even with highly complex computations is distributed computing, or creating a shared compute farm. Instead of having one huge (and expensive) supercomputer that can quickly handle a massive number of computations, a compute farm works by linking together lots of regular servers or computers, each of which attacks a small piece of the puzzle. Making this work in a business setting is a matter of integrating hardware, software and people resources.
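The fan-out idea described above can be sketched in a few lines. This is an illustrative example only, not the bank's actual system: a hypothetical `price_instrument` function stands in for any computation-heavy task, and a pool of workers stands in for the farm's individual machines.

```python
# Minimal sketch of the compute-farm idea: split a large job into many
# small tasks and fan them out to a pool of workers, each of which
# handles one small piece of the overall puzzle.
from concurrent.futures import ThreadPoolExecutor

def price_instrument(instrument_id: int) -> float:
    """Stand-in for a heavy calculation (e.g., pricing one instrument)."""
    return sum(i * i for i in range(instrument_id)) % 1000 / 10.0

def run_farm(instrument_ids, workers=4):
    # Each worker processes tasks independently; results come back in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(price_instrument, instrument_ids))
```

In a real farm the workers would be separate servers reached over the network rather than local threads, but the pattern is the same: one coordinator splits the job, many inexpensive workers each do a small share.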

An international banking company that had a number of computation-intensive processes was struggling to find answers to its needs. The company’s IT group had purchased and lab-tested “blade technology” (named for the thin servers that are installed vertically), and was recommending this technology as the basis of a shared compute farm.

In most companies, the IT group would likely just go ahead and implement its chosen solution, in essence, forcing the users to come along for the ride. But this company decided to use a different approach: the IT team was charged with using Design for Lean Six Sigma (DFLSS) techniques to develop a solution for one particular application. In this case, the application was performing calculations used on the trading floor. If that worked out, the company would then explore using the same solution with other trading floors and other business lines, such as asset management or risk analytics.

Why DFLSS instead of the typical DMAIC (Define, Measure, Analyze, Improve, Control) approach used in Lean Six Sigma? DFLSS encompasses tools and methods that impose discipline into the process of defining multiple customers and their specific needs, converting those needs into design and functionality requirements, and then exploring alternatives for meeting those requirements. The result is solutions that achieve an appropriate balance between functionality, feasibility, cost and customer satisfaction.

Table 1: Example Statements of Customer Needs

VOC Category                  Example Statements
Cost                          — Minimize cost of ownership of CPUs
                              — Optimize capacity on dedicated calculation engines
Infrastructure Optimization   — Contingency and disaster recovery
                              — Peak usage coverage
Performance                   — Improve calculation speed
                              — Ability to maintain calculation speed
Service Level                 — Infrastructure is secure
                              — Infrastructure is reliable
Table 2: CTQ Performance Requirements

CTQ Element                             Target Performance Level
Network bandwidth utilization           Less than 15% increase
Percent re-use of existing servers      100% of Type 1 servers
Percent utilization of calc engines     15 to 80%

Defining Customer Needs

DFLSS begins with a broad exploration of customer needs, refinement of those needs into representative voice of the customer (VOC) statements, and translation of those statements into critical-to-quality (CTQ) performance characteristics that the final solution must meet.

The customers in this case were three internal divisions that had high computing demands – global structured products, mortgage-backed securities and corporate treasury. The team pulled together a focus group of 12 people from the three customer segments, and came away with a list of 16 VOC statements, which they converted to 20 CTQ characteristics. Tables 1 and 2 show a sampling of these needs and targets.

Exploring Potential Solutions

Once a team has clarified the needs and performance targets that must be met, DFLSS methods require an exploration of alternative solutions. This team began by identifying three software products that could meet its needs. The final choice was a product that:

  • Completely addressed the CTQ requirements (generated from the VOC data).
  • Had predicted capabilities (Six Sigma quality levels) that outperformed the existing mechanisms for distributed computing tasks.
  • Had built-in error proofing/correcting functionality. (For example, the software would provide alerts whenever the primary system was stretched beyond capacity so the infrastructure would know to shunt some tasks through other systems.)
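The capacity-alert behavior in the last bullet can be illustrated with a small routing sketch. This is not the vendor product's implementation; the 80% threshold, pool names, and functions are all assumptions made for the example.

```python
# Illustrative sketch of the error-proofing idea: when the primary pool
# of calculation engines is stretched beyond a capacity threshold, new
# tasks are shunted to an overflow pool instead.

CAPACITY_THRESHOLD = 0.80  # assumed alert level; the real product's is unknown

class EnginePool:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.in_flight = name, capacity, 0

    def utilization(self):
        # Fraction of this pool's capacity currently in use.
        return self.in_flight / self.capacity

    def submit(self, task):
        self.in_flight += 1
        return f"{self.name} accepted {task}"

def route(task, primary, overflow):
    # Error-proofing: once the primary pool is saturated, reroute the
    # task through the overflow pool rather than letting it queue up.
    if primary.utilization() >= CAPACITY_THRESHOLD:
        return overflow.submit(task)
    return primary.submit(task)
```

A real system would also raise the alert the article mentions (so operators know the primary is saturated), but the core safeguard is the same: work is never silently dropped when one pool runs out of headroom.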

The team also was allowed to look beyond the original blade technology solution, and came up with two alternatives:

The Desktop Solution – This global company had trading floor operations in many markets around the world. And while some market is open at any given hour of most days, no market is open 24/7. That meant the desktop computers serving any particular market would be sitting around idle for a good part of each day when that market was closed. As computer geeks put it, “a CPU is a CPU” – and all these idle desktop computers represented a lot of potential computing capacity. Surely the company could tap that capacity. In “compute-farm speak,” this is phrased as, “Why not share an acre of land in another country?”
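The eligibility rule behind the desktop solution is simple to express: a desktop may take farm work only when its local market is closed. The sketch below is hypothetical; the trading window is an assumed 9:00–17:00 and the function name is invented for illustration.

```python
# Hypothetical sketch of the desktop-sharing rule: a trading-floor
# desktop is eligible for compute-farm work only outside its local
# market's trading hours, when it would otherwise sit idle.
from datetime import time

MARKET_OPEN = time(9, 0)    # assumed local trading window
MARKET_CLOSE = time(17, 0)

def eligible_for_farm(local_time):
    # Eligible whenever the local market is closed.
    market_open = MARKET_OPEN <= local_time < MARKET_CLOSE
    return not market_open
```

Because the company's markets span many time zones, at any given hour some desktops are inside their trading window and others are not, which is exactly the idle capacity the farm taps.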

Outsourcing on Demand – This solution involves purchasing computing capacity from third-party providers, accessed via secure networks.

As team members discovered, each of these alternatives generated additional VOC requirements that were not part of their original thinking. Tapping into desktops, for example, would require some way to make sure that the regular work done on those desktops was not interrupted. Outsourcing brings with it a lot of requirements around security and confidentiality.

Implementing the Hardware Solution

As it turned out, all three hardware solutions had a lot of potential, with none vastly superior to the others. This posed a dilemma: should the team expand its focus and try to implement all three solutions, or pick one at random? Trying to do everything is the classic scope creep that has stopped many projects in their tracks.

But since all three options seemed viable, the company management adopted a multi-generational approach. The original team was to carry through with implementing the blade technology. At the same time, two new teams were formed: one to continue down the path of desktop utilization, and another to investigate outsourcing.

In the end, the company purchased the selected software product which, thanks to the up-front analysis, is doing exactly what the company needs it to do. The company has already installed blade server technology and developed desktop utilization options for its trading floor operations in at least three time zones. The third team is still working on the outsourcing option. Plus the IT group is responding to interest from other computer-intensive operations within the company. The co-op farm is growing.

The Benefits of DFLSS

Had this company followed the path originally planned by its IT group, the company would have invested a lot of money in purchasing and installing blade technology. And that probably would have worked, at least up to a point. But by following DFLSS methods, the company now has greater flexibility for meeting its customers’ functional needs, is getting more leverage from its software purchase, and has established a low-cost model that can now be adapted to other operations.
