In financial services, the dangers associated with monetizing big data are nearly as great as the rewards. The promises of machine learning, data science, and Hadoop are tempered by the realities of regulatory penalties, operational costs, and profit margins that must quickly justify any such expenditure.
Additionally, there is often another layer of complexity best summarized by Carl Reed, who spent more than 20 years overseeing data-driven applications at Goldman Sachs and Credit Suisse. According to Reed, who currently serves as an adviser to the Enterprise Data Management Council, PwC and Cambridge Semantics, the sheer size of the customer base and infrastructure of larger organizations compounds the issues. He says, “Credit Suisse [had] 45,000 databases; Goldman Sachs [had] 90,000 databases. If you let the financial industry continue to implement technology vertically versus horizontally—because you’ve got no C-suite saying data isn’t a by-product of processes, it’s an asset that needs to be invested in and governed accordingly—you’ll end up with today 10s, tomorrow 100s, and in time 1000s of instances of Hadoop all over your organization.”
The most acute lesson from Reed’s tenure in financial services is to avoid such issues through a single, enterprise-wide investment in data architecture that pays for itself across numerous applications. In doing so, organizations can justify data expenditure with a staggering number of use cases predicated on variations of the same investment.