August Community Newsletter: How a Data Refinery Drives ROI

Building New Partnerships: Hello from the CEO!

Philip Brittan, Crux CEO

As the volume, velocity, and variety of data increase, firms face the challenge of finding and using data without taking on its operational burdens. It’s a new-to-the-world problem—one that companies can only solve by building the right partnerships with each other.

Crux enables these partnerships. Our service helps data flow optimally between people.

Crux seamlessly connects people with the data sources they need. Our secure data pipelines validate and standardize data from any source, and deliver that data instantly to our growing network of clients. Connecting to Crux replaces the need for companies to build infrastructure to wrangle data. The right data, in the right formats, is already here, helping people move faster to insights.

Now our team is creating new connections and expanding the network of delightful data for our clients. We’ve built partnerships with three major market data suppliers, providing equity, pricing, fundamentals, fixed income, and corporate actions data. We’re also collaborating closely with leading suppliers of mortgage, risk, and geolocation data. These companies recognize the potential of a network to deliver data more rapidly and more valuably—with less burden placed on the individual.

Data is an infinitely abundant resource. It flows among us at unprecedented speed and brings the industry tremendous potential. Ultimately, the companies who unlock the true value of data will be those who find new and valuable ways to connect.


How a Data Refinery Drives ROI

Your firm needs a strong data strategy in order to generate ROI on the data you’re purchasing. By 2020, firms will spend more than $7 billion on alternative data sources, from geolocation data from mobile phones, to images from satellites and drones, to social media sentiment and news, according to a study by Opimas. Yet to translate data into competitive advantage, firms need to take the next step: finding an efficient way to integrate data, map data into optimal formats, and seamlessly distribute it to users.

Crux has built that next step for you. Our data refinery service extracts data from any source, cleans the data, applies standards to make the data easy to work with, and delivers the data instantly across your firm to the people who need it. The simplicity and ease of a data refinery removes the vast majority of data’s costs. Firms become free to create value from data, instead of wrestling with its quality and structure. A cloud model replaces the costs of big data warehouses and slow software systems, and frees up data for your teams to use more creatively, helping you make new discoveries.

We’re not surprised to learn that 60% of institutional investors plan to outsource data management over the next three years, according to State Street. With delightful data pipelines and deep data engineering talent, Crux is ready to refine your data and help you generate ROI.


Opportunities

    • Are you intellectually curious and looking to build the future of data infrastructure? Do you love solving complex problems and helping clients overcome data challenges? If so, let’s talk! The Crux team is growing and seeking highly skilled talent that believes data can be delightful. Check out our job postings here.
    • Are you a data supplier looking to reach more clients with your data? Our data supplier network is growing quickly! Crux has a diverse list of datasets ranging from stock quotes to corporate trends to transportation data and more. See our network and create your own profile.

Insights from Industry Events

Citi’s Beyond the Basics Conference: May 22, Napa Valley, CA

Key Insight: Banks are shifting data to the cloud, applying streaming ETL tools, and integrating tech systems to help people collaborate. In 2018, CIOs understand that people need adaptable, personalized tech and will quickly move on if tools cannot deliver.

Eagle Alpha Alternative Data Showcase: May 31, Lowenstein Sandler, NYC

Key Insight: By Q4 2018 or Q1 2019, asset managers will embrace the benefits of alternative data. In 2017, more than 78% of U.S. hedge funds made alternative data part of their data strategy. Compare that figure to 52% in 2016. (Source: Quinlan and Associates.)

Fixed Income Leaders’ Summit: June 7, Westin Copley Place, Boston

Key Insight: To comply with the EU’s new MiFID II policy, fixed income traders need clarity into the lineage of their data. Fixed income leaders are seeking more efficient and transparent ways to source data, assess its quality, and use it for competitive advantage.

Battle of the Quants: June 20, Carnegie Hall, NYC

Key Insight: While demand for new, unstructured data sources is rising, traditional market data hasn’t lost its importance. Finding creative ways to combine market and alternative data can lead to more valuable discoveries.

BattleFin Discovery Day: June 20, The Intrepid, NYC

Key Insight: As more and more types of data become available and the data market becomes fragmented, firms across the industry are all trying to access data and get it into a useable form. “The next step for the industry” is a managed service for these common tasks, a “utility where everyone can share,” says our CEO Philip Brittan.

Goldman Sachs Hedge Fund Technology Seminar: June 21, Goldman Sachs, NYC

Key Insight: To implement data across your organization, start with a strong data strategy. Every hedge fund needs to plan how to integrate data from multiple sources into a single point, how to map data according to a consistent set of standards, and how to distribute data to relevant users. A sound data strategy makes your fund more efficient, improves alignment among teams, and can drive profitability as well.

The Must-Have Qualities of a Platform Engineer

Spotlight: Jonathan Major, Head of Engineering and Operations

With nearly 20 years of experience in engineering and management, Jonathan Major leads platform engineering, data pipeline operations, and information security at Crux. He’s committed to mastering new technologies, building great teams, and partnering with customers to make their data delightful. We sat down with Jonathan to learn which qualities make a great engineer.

What qualities make a great platform engineer?

It’s the person who is inquisitive, who always has a learning mentality, and a builder mentality as well. Every day, I’m asking, “Have we got a process to X, Y, Z?” If not, then we build one. It’s constant learning. That’s what I like to see in engineers. Obviously, they also need to know what they’re doing—they need to be competent. We like smart, inquisitive, collaborative people.

How are you helping make data delightful for our clients?

Reliable. Robust. Secure. That’s my mantra. Coming in as one of the founding engineering members of Crux, I’ve seen this company grow. I’m learning all the time. I’m not scared to take on opportunities. I’m bringing to bear my almost 20 years of experience to make [Crux’s]  engineering and culture delightful. For our customers, it’s not only [about] their experience with the API or the UI. It’s also when I meet with them.

What’s an important lesson that you’ve learned from your experience in engineering?

Engineers have a tendency to build things as they want to use them, which can be wrong. So the lesson is to stay customer focused. It’s okay to ask questions. We want that questioning culture where people ask, “How do you feel about this?” That goes into having a context-driven engineering team.

It’s a partnership, so [it’s about] asking questions, learning from customers, and partnering with product management to really drive down to the ask. Engineers can create applications with unicorns and castles, but that’s not really what you want. The client wants a very efficient way to solve their problems.

What are your favorite things to do outside of work?

I’m family oriented. I’m married and have two kids who are nine and eleven, so we have a lot of family time – and travel. We just spent two weeks in Cape Cod and Montauk – that was great. Coming from Europe, I’ve got my parents coming over next week from Ireland.

My wife and I, living in the Bay Area, have got lots of opportunities to try new and interesting restaurants. So we do enjoy the local restaurant scene—although I don’t buy expensive avocado toast!


Jonathan still misses his Sinclair ZX81 computer—but these days, he puts up with engineering large-scale distributed systems. He loves “rubber ducking,” sprint planning, fixing bugs, and keeping up with new technologies. Jonathan has worked at technology firms such as IBM and Lotus in Ireland and has held leadership positions at financial firms including Barclays Global Investors and BlackRock.

Charging Up Mountains

I like to think of getting projects done as ‘charging up mountains.’ I appreciate the intense, concerted effort and determination needed to do that, and I especially enjoy the unequivocal feeling of doneness when you reach the top of the mountain. You know you are on top: it’s down in every direction from there (you just have to be on the lookout for ‘false peaks,’ though those are usually quickly found out when you look around a bit).

For this reason, inside Crux we refer to our Platform releases by names of American peaks that are at least 14,000 feet tall.  We are in the midst of slowly working our way up from the shortest of those (Sunshine Peak in Colorado) to the tallest. When we summit Denali, maybe we’ll do international peaks 🙂  

SafeGraph now on Crux!

We’re delighted to welcome SafeGraph’s geolocation data products to the Crux platform! SafeGraph is a geolocation data provider that maintains ground truth datasets for human movement, physical places, and visits to places of interest. Connect to SafeGraph or learn more here.

ETL → EVLS TTT

In the world of Data Engineering, the acronym “ETL” is standard short-hand for the process of ingesting data from some source, structuring it, and loading it into a database from which you’ll then use that data. ETL stands for “Extract” (ingest, such as picking up files from an FTP site or hitting a data vendor’s API), “Transform” (change the structure of the data into a form that is useful for you), and “Load” (store the transformed data into a database for your own use).
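
To make the pattern concrete, here is a minimal ETL sketch in Python; the FTP host, file path, column names, and table name are hypothetical placeholders rather than a real vendor feed:

```python
import io
import sqlite3
from ftplib import FTP

import pandas as pd


def extract(host, path):
    """Extract: pick up a CSV file from an FTP site (or hit a vendor's API)."""
    buf = io.BytesIO()
    with FTP(host) as ftp:
        ftp.login()  # anonymous login, just for this sketch
        ftp.retrbinary(f"RETR {path}", buf.write)
    buf.seek(0)
    return pd.read_csv(buf)


def transform(raw):
    """Transform: change the structure of the data into a form that is useful for you."""
    df = raw.rename(columns=str.lower)
    df["date"] = pd.to_datetime(df["date"])       # normalize the date column
    return df.dropna(subset=["ticker", "close"])  # drop rows missing key fields


def load(df, db_path):
    """Load: store the transformed data in a database for your own use."""
    with sqlite3.connect(db_path) as conn:
        df.to_sql("daily_prices", conn, if_exists="append", index=False)


if __name__ == "__main__":
    load(transform(extract("ftp.example-vendor.com", "prices/latest.csv")), "prices.db")
```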

At Crux, where we serve the needs of many clients, our approach is slightly different. It is designed both to gain the economies of scale we can leverage as an industry utility serving multiple clients across multiple sources of data, and to give individual clients the customized experience they’d expect from a managed service tailored to their specific use cases.

“EVLS TTT” stands for Extract, Validate, Load, Standardize, Transform, Transform, Transform… “Extract” here is the same as above (ingest data from its source). “Validate” means to check the data as it comes in to make sure that it is accurate and complete. If a data update fails any of the Validation tests we run on it, our Data Operators are notified and corrective action is taken immediately. Examples of validations include ensuring that an incoming data update matches the expected schema, that the values in a column conform to expected min/max ranges (for example, that numbers in a column that should always be positive actually are), and that enumerated types (such as Country) fall within the accepted set of values. Validations also test data coverage: does the date range in the update conform to what is expected? If the dataset is supposed to cover companies in the S&P 500, does it in fact cover all those companies, no more, no less? Validations also look for unlikely spikes and jumps in continuous data, identifying outliers for closer examination by our Data Operators.
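
To make these checks concrete, here is a minimal Python sketch of schema, range, enumeration, coverage, and outlier validations; the column names, thresholds, and expected universe are hypothetical placeholders, not Crux’s actual validation suite:

```python
import pandas as pd

EXPECTED_COLUMNS = {"date", "ticker", "country", "close", "volume"}
VALID_COUNTRIES = {"US", "GB", "DE", "JP"}  # accepted enumerated values for this sketch


def validate_update(df, expected_tickers):
    """Return a list of validation failures; an empty list means the update passed."""
    # Schema check: the update must match the expected set of columns.
    if set(df.columns) != EXPECTED_COLUMNS:
        return [f"schema mismatch: got {sorted(df.columns)}"]

    failures = []

    # Range check: values in these columns should always be positive.
    if (df["close"] <= 0).any() or (df["volume"] < 0).any():
        failures.append("out-of-range values in close/volume")

    # Enumeration check: country codes must fall within the accepted set of values.
    bad_countries = set(df["country"]) - VALID_COUNTRIES
    if bad_countries:
        failures.append(f"unexpected country codes: {bad_countries}")

    # Coverage check: the update should cover exactly the expected universe, no more, no less.
    missing = expected_tickers - set(df["ticker"])
    extra = set(df["ticker"]) - expected_tickers
    if missing or extra:
        failures.append(f"coverage mismatch: missing={missing}, extra={extra}")

    # Outlier check: flag unlikely spikes and jumps in a continuous series.
    jumps = df.sort_values("date").groupby("ticker")["close"].pct_change().abs()
    if (jumps > 0.5).any():
        failures.append("possible outliers: >50% day-over-day moves in close")

    return failures
```

In this sketch, any non-empty result is what would trigger a notification to a Data Operator before the update reaches downstream consumers.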

“Load” is as above (store the data into a database). “Standardize” is a set of special Transformations that get the data into an industry-standard form that makes the data easy to understand, easy to join across multiple datasets from multiple vendors, easy to compare, etc. Examples of standardization include data shape (such as  unstacking data and storing it all as point-in-time), entity mappings (such as security identifiers), and data formats (such as using ISO datetimes). We do of course store the raw data exactly as it comes from the data supplier, which customers can access as easily as the cleaned/standardized version.
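
For instance, a standardization step that unstacks a wide vendor file into point-in-time rows, maps vendor tickers to a common security identifier, and rewrites dates as ISO datetimes could be sketched as follows (the mapping table and column names are illustrative assumptions, not a real mapping):

```python
import pandas as pd

# Illustrative vendor-ticker -> common-identifier mapping; not a real mapping table.
TICKER_TO_FIGI = {"AAPL": "BBG000B9XRY4", "MSFT": "BBG000BPH459"}


def standardize(raw):
    """Unstack a wide vendor file into point-in-time rows and apply standard identifiers and formats."""
    # Data shape: one row per (date, ticker, field) instead of one column per field.
    long = raw.melt(id_vars=["date", "ticker"], var_name="field", value_name="value")

    # Entity mapping: translate the vendor's ticker into a common security identifier.
    long["figi"] = long["ticker"].map(TICKER_TO_FIGI)

    # Data formats: store dates as ISO 8601 datetimes.
    long["date"] = pd.to_datetime(long["date"]).dt.strftime("%Y-%m-%dT%H:%M:%SZ")

    return long[["date", "figi", "ticker", "field", "value"]]
```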

We leave a gap between the “S” and the “TTT…” because at that point the data has been loaded, checked, and put into a form that should suit the needs of most customers as is. We call this spot “the Plane of Standards”. The “TTT…” are any number of specific Transformations that are required by individual clients to suit their own use cases (mapping to an internal security master, joining several sets of data together, enriching data with in-house metadata, etc). Those Ts give clients the ability to have a customized experience when using Crux.
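
As a toy illustration of those client-specific Ts, the sketch below chains a client’s own transformations on top of the standardized output; the internal security master join and watchlist filter are hypothetical use cases, not actual client code:

```python
from functools import reduce

import pandas as pd


def map_to_internal_master(df):
    """Hypothetical T: join standardized data to a client's internal security master."""
    master = pd.DataFrame({"figi": ["BBG000B9XRY4"], "internal_id": ["SEC-001"]})
    return df.merge(master, on="figi", how="left")


def filter_watchlist(df):
    """Hypothetical T: keep only the securities on this client's watchlist."""
    return df[df["internal_id"].isin({"SEC-001"})]


def apply_client_transforms(standardized, steps):
    """Chain any number of client-specific transformations on top of the Plane of Standards."""
    return reduce(lambda df, step: step(df), steps, standardized)


# Usage (after the EVLS steps have produced `standardized`):
# custom_view = apply_client_transforms(standardized, [map_to_internal_master, filter_watchlist])
```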

The goals of all this are to gain economies of scale and mutualize the effort and cost of the commoditized parts of the process, making it cheaper and faster for everyone, while still giving clients the ability to get a bespoke output to serve their individual use cases. Crux does not license, sell, or re-sell any data itself: Data Suppliers license their products directly to customers, maintain direct relationships with those clients, and control who has access to their data on the Crux platform. We work in partnership with Data Suppliers to help make their data delightful for our mutual clients.

Spring Community Newsletter – Creating Data Harmony

Hello from the CEO

Philip Brittan, Crux CEO

The Crux team has ramped up our rhythm, and our efforts are reverberating in the marketplace. As you may have heard, Citi invested $5 million in Crux earlier this year. This support from Citi affirms the resonance of our services for leading companies. Helping people work in harmony with data creates meaningful movement forward.

My musical metaphors here are intentional. A little-known fact about me: in addition to several decades working in finance and tech, I’ve learned a lot about business from many years of composing music.

A composer has to keep their vision clear in their head as they work, and make sure that each element they develop truly supports it. While building a rhythm that flows and uncovering the notes that resonate, a composer relates these technical processes back to the larger purpose of the work.

I founded Crux with a meaningful mission: to create harmony between people and data. Now my role is to make sure that every element of our business works in harmony to achieve that goal. Every day, I conduct our talent, skills, and resources to flow together towards our purpose. When I face a challenge, I remind myself how it fits into the big picture and consider how my decision today can affect our company’s core goal.

Crux can orchestrate your data supply chain to deliver more value to you. We help data flow in a way that enriches everyone. That’s music to our ears, and we hope to yours too.


Crux Insights Blog: Core Access Service

Data is an unlimited resource. It holds the potential to help companies move faster, investments produce rewards, and people make smarter decisions. If only data were easier to wrangle, your business could tap into that potential now.

Welcome to Crux, where your data is ready for action. Our Core Access service delivers your business clean, organized, normalized data instantly in the cloud. So you can say goodbye to hours of busy work in the back office, and hello to the work that creates value for your business.

How does the Core Access service for data work? Crux’s experienced data engineering teams extract, validate, load, and standardize your datasets. Insight-ready data flows into your business, empowering your front-office teams and technology. No more need for your company to build, buy, or maintain costly data infrastructure. Just plug into Crux, and you’re ready for research, risk analysis, and trading.

Click here to sign up, and bring greater efficiency to your researchers, quants, and portfolio managers. Read on to find out more about the traditional and alternative datasets we can offer you through our single access platform and API:

Crux Community

Are you curious? Do you love solving complex problems and delivering white-glove service to clients? If so, let’s talk. The Crux team is growing and seeking highly skilled talent that believes data can be delightful. Check out our job postings here.

Have data to share? Our data supplier community is growing by leaps and bounds.  Our diverse datasets range from stock quotes to corporate trends to transportation data and more.  No data is irrelevant.  Check out our network and create a profile of your own.


Out and About

STONE POINT CAPITAL FINTECH SYMPOSIUM – April 5 | The Park Hyatt, NYC

BENZINGA GLOBAL FINTECH AWARDS – May 15-16 | New World Stages, NYC

MBA’S NATIONAL SECONDARY MARKET CONFERENCE & EXPO 2018 – May 20-23 | New York Marriott Marquis, NYC

DATADISRUPT – May 22-24 | Lerner Hall, NYC

February Community Newsletter – Data Engineering: Get to the Crux

Hello from the CEO

Philip Brittan, Crux CEO

Delightful data is useful and usable. Data Scientists make data useful through analysis that extracts valuable insights from the data. But first, Data Engineers make that data usable by whipping it into shape: loading it, cleaning it, normalizing it, mapping it, joining it, and performing other transformations that get the data ready for Data Scientists to wring value out of it.

While Data Science gets the headlines, Data Engineering is working hard behind the scenes to make the Data Science magic possible. And by working hard, I mean that Data Engineering typically accounts for 70-80% of the total effort a firm spends on making use of data. Data Science and the unique insights it delivers are business differentiators, but most firms spend a minority of the time on them.

That’s why forward-looking companies increasingly turn to a partner like Crux. By offloading their Data Engineering work, these companies give more time and energy to Data Science and move much more quickly to produce valuable new insights that power their businesses.

Crux brings laser focus, deep expertise, operational oversight, and a valuable network of data suppliers to help you orchestrate, implement, and operate your information supply chain.

At Crux, we make data delightful.

 


Crux Insights Blog

How can you keep the right data flowing into your business? It is simple: Orchestrate, Implement, and Operate. Read about Crux’s three-step process in our last blog post.

 

 


Five in 5 with Head of Data Engineering Andrew Clark

Andrew Clark is Crux’s head of data engineering, a tall task. At 6’6”, he sees the full spectrum of data needs for Crux clients. With deep experience in managing unstructured data, he’s a master of data transportation, storage, and repackaging. Here are five questions in 5 minutes with Andrew:

 

What does a data engineer do?
At Crux, being a data engineer means handling the tough work that makes data more actionable for our clients, and designing the tools that make our clients’ lives easier over time. Data engineers sit on the “data wrangling” side of the pipeline, meaning we are the folks who handle the hard work of figuring out where certain elements of the dataset live, slicing and dicing data, and repackaging it for distribution.

 

How has the data engineering landscape changed in the last 5 years?
Today, the folks managing information supply chains are embracing the fact that the whole process does not need to exist on-premises anymore. While firms used to believe their data engineering was their “secret sauce”, today they realize it’s the insights they can glean that are more important. Using experts like Crux to remove as much of the tedious, upfront work as possible is now the preferred model.

 

What are you most excited about?
At Crux, we’re illustrating the art of the possible for our clients. What was once difficult has now become easy. Helping clients realize the full potential of their data is truly exciting.

 

What do you do when you are not engineering data?
I am a big outdoorsman, so my favorite activities tend to be outside. I am an avid cyclist, I have a motorcycle and am currently building an airplane.

 

What would geolocation data tell us about you?
If you were to assess my geolocation data, you’d probably find that when I am not working, I like to go to places where the population density is low. This means you’ll probably find me on my bicycle, hiking, or somewhere outdoors and away from the city.

 


Crux Community

Is it difficult to get access to useable data? Let Crux experts engineer your data to make it ready to use. Our data engineers take on your data challenges so that you can spend your time finding signals. Click HERE  to chat with our team of experts.

Have data to share? Our data supplier community is growing by leaps and bounds.  Our diverse datasets range from stock quotes to corporate trends to transportation data and more.  No data is irrelevant.  Create a Crux login HERE to browse our network and become a supplier.


Out and About

We’ve been building our community. In the past month, we’ve met with hundreds of suppliers and buyers of alternative data.

 

Quandl Alternative Data Conference | January 18, 2018
New York, NY

 

Battlefin Discovery Day Miami | January 30-31, 2018
Miami, FL

 

Outsell Data Money | February 1, 2018
New York, NY

 

AI in Fintech Forum | February 8, 2018
Stanford University, Stanford, CA

Orchestrate, Implement, Operate

 

In my last blog post, I talked about how informatics firms help companies ‘orchestrate, implement, and operate’ their information supply chains.  What exactly do I mean by that?  As an Informatics firm, this is what Crux does:

 

Orchestrate:  ‘Orchestrating’ in general means pulling together and coordinating a variety of components to work together effectively, the way the conductor of an orchestra makes sure the individual musicians are playing together effectively to bring the music to life. The first step in creating a supply chain is deciding the elements that need to go into it. This is driven by the use case of the consuming customer (hedge fund, bank, insurance co, etc). What data do they need, and in what form do they need it?  Crux works with a supplier network of partners: data publishers, analytics firms, and service providers who form the components of the supply chain that Crux implements and operates.  In some cases, a consumer may have a specific dataset or vendor that they know they want to work with.  In some cases, the consumer only knows the type of data they want and they look to Crux to help them surface potential providers of that data and possibly to run tests on candidate datasets to objectively test the fitness of that data to the customer’s use case.  Crux works with a wide range of tools and 3rd party service providers and pulls them into the appropriate set to meet the needs of the specific supply chain.  For instance, there may be a specialist who transforms the data in some specific way (akin to a ‘refiner’ in my last blog post).  Crux partners can make themselves visible to clients on the Crux platform so that customers can browse and learn about specific datasets, analytics, and services, get inspired, and express interest in exploring any of them more deeply.

 

Importantly, Crux does not sell or resell any data or analytics itself — producers and consumers can count on Crux being an objective, neutral partner, and producers have full control over where their data goes.  Providers license their content directly to customers, and Crux acts as a third-party facilitator to wire up and watch over the data pipelines, on behalf of customers, as described below.

 

Implement:  A supply chain fundamentally involves the flow of goods from producer to consumer. In the physical goods world (traditional logistics), that involves transportation, storage, and (potentially) repackaging. In the case of an information supply chain, it involves the transportation, storage, and repackaging of data.  These are the fundamental data engineering tasks that allow data to flow between parties in a way that is maximally actionable for the consumer.  These data engineering tasks generally involve writing software that ingests the data (maybe picks up FTP files, copies from an entitled S3 bucket, legally scrapes a web site, hits an API, etc.), validates it (looks for missing, unrecognizable, or erroneous data), structures it (usually into one or more database tables), cleans it, normalizes it, transforms it, enriches it, maps embedded identifiers, joins it with other data, removes duplicate entries, etc., all to support the specific use case of the customer.  This is the kind of data engineering work that Crux does to implement a specific supply chain for a customer, pulling in the appropriate data providers, tools, and value-added service providers identified in the Orchestration phase.
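
As a loose illustration of a couple of those tasks (ingesting from an entitled S3 bucket, then cleaning, de-duplicating, and joining), here is a minimal Python sketch; the bucket, file, and column names are assumptions, not a real pipeline:

```python
import boto3
import pandas as pd


def ingest_from_s3(bucket, key):
    """Ingest: copy a data file from an entitled S3 bucket (bucket and key are placeholders)."""
    obj = boto3.client("s3").get_object(Bucket=bucket, Key=key)
    return pd.read_csv(obj["Body"])


def clean_and_join(prices, reference):
    """Clean, normalize identifiers, remove duplicates, and join with other data."""
    prices = prices.drop_duplicates(subset=["date", "ticker"])   # remove duplicate entries
    prices["ticker"] = prices["ticker"].str.upper().str.strip()  # normalize embedded identifiers
    enriched = prices.merge(reference, on="ticker", how="left")  # enrich/join with reference data
    return enriched.dropna(subset=["sector"])                    # "sector" comes from the assumed reference table
```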

 

Operate:  Rarely is a dataset static. The vast majority of datasets receive regular updates, whether that’s once a month, or once per millisecond.  As that data flows, constant vigilance is needed to make sure data shows up when it is supposed to, that it’s not missing anything, that it doesn’t contain unidentifiable components.  Data Operations includes the monitoring and remediation of ongoing data streams.  Crux Data Operators set up dashboards and alerts to keep a close eye on data in motion and all the systems it travels through. When a problem is spotted, they immediately begin diagnosing and remediating the issue, in tight collaboration with the relevant data provider(s), to try to get ahead of the issue before it affects downstream consumers.  Data Operations also includes handling standard maintenance tasks such as watching for and reacting to data specification changes and scheduled maintenance outages coming from the data provider(s).
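
As a simple illustration of the kind of check a monitoring setup might run, the sketch below flags datasets whose updates are overdue; the dataset names and expected cadences are hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical delivery log: dataset name -> timestamp of the last update received.
LAST_UPDATE = {
    "vendor_a_daily_prices": datetime(2018, 8, 1, 6, 5, tzinfo=timezone.utc),
    "vendor_b_corporate_actions": datetime(2018, 7, 30, 22, 0, tzinfo=timezone.utc),
}

# Hypothetical schedule: how long past the expected cadence we wait before alerting.
MAX_LATENESS = {
    "vendor_a_daily_prices": timedelta(hours=24),
    "vendor_b_corporate_actions": timedelta(days=2),
}


def late_datasets(now):
    """Return the datasets whose updates have not arrived within the expected window."""
    return [
        name
        for name, last_seen in LAST_UPDATE.items()
        if now - last_seen > MAX_LATENESS[name]
    ]


if __name__ == "__main__":
    for name in late_datasets(datetime.now(timezone.utc)):
        # In production this would alert a Data Operator; here we just print.
        print(f"ALERT: {name} is overdue; begin diagnosis with the data provider.")
```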

 

These are the key elements of Information Supply Chain Logistics in a nutshell.  It is a rich process and gives customers tremendous leverage in harnessing the integrated value of a network of suppliers.

 

Contact Crux if you’d like to learn more.

Informatics Firms and Information Supply Chains

Philip Brittan, CEO of Crux Informatics, Inc.

One of the most revolutionary steps in the evolution of manufacturing has been the emergence of sophisticated supply chains. To understand them, first imagine how a person or a firm could create a new product by gathering raw materials and making all the parts themselves. Then imagine how pieces of that process are picked up by others who specialize in various ingredients that go into creating the finished products, such as raw materials providers, tools makers, and (eventually) component manufacturers who create standardized subsets of a product that can be assembled by multiple downstream firms to produce different end products.

Over time the raw materials become more refined (planed lumber instead of timber, jet fuel instead of crude oil, steel instead of iron), and the refiners may in fact be separate companies in the supply chain who take in raw materials and output refined materials, perhaps in several steps by several companies. Over time, tools become more sophisticated and specialized, consuming materials and tools from their own supply chains. Components become increasingly complex and comprehensive (producing larger assemblages), again consuming materials, tools, and possibly sub-components from upstream. With this evolution, manufacturing supply chains have become exceedingly sophisticated and complex, with literally thousands of companies working together to build a car, for example.

One of the key innovations needed to allow this is standards. Thanks to accepted and widely used industry standards, a screw firm can specialize in making screws for a large number of downstream firms, without each screw being a bespoke project. That specialization/focus, and the automation that’s possible when manufacturing standardized components, drives economies of scale and advances in efficiency.

Along with physical goods, supply chains eventually also come to encompass value-added services, such as consulting, metrics gathering, supplier ratings, etc. A special kind of service provider associated with supply chains is the Logistics company. The Wikipedia definition of supply-chain logistics explains that “logistics is the management of the flow of things between the point of origin and the point of consumption in order to meet requirements of customers”. Logistics firms help companies orchestrate, implement, and operate their complex supply chains. They generally work with a network of suppliers that they can bring to bear when helping a firm set up a supply chain. And they have the skills and tools to make sure that the supply chains are operating smoothly, which in the physical goods world frequently involves planning and arranging efficient transportation and storage.

In information intensive industries, such as financial services, processing information to drive valuable insights is the core “manufacturing process”. For example, financial firms of all kinds—banks, hedge funds, research houses, private equity firms, insurance companies, etc.—all take in relevant information about the world, process and perform analysis on that information, drive insights, and take action on those insights. That action can take many forms—make a loan, place a trade, rebalance a portfolio, pitch a client, author a research report, buy a company, underwrite a policy, etc, depending on the type of firm—but all firms have at their core that critical process of gathering information and performing analysis to drive insight.

Over time, the range of information that firms utilize in this core process has grown in volume, velocity, and variety. As such, firms have started to move beyond simply collecting raw material (data), to thinking about their information supply chains, an evolution that closely mirrors what we have seen in manufacturing industries. We are witnessing rapid evolution in the tools that are available to companies to process and analyze data. And a large variety of suppliers, in the form of ‘alternative’ data vendors, have sprung up to meet the ever-expanding needs of financial firms to feed their insight generation processes. One interesting feature of information supply chains is that they may be looping, meaning company A may produce some data (perhaps exhaust from a trading system), feed it to one or more refiners, aggregators, or derived-data producers, who then feed their output back to company A to use in their analytics.

These information supply chains are getting more complex and thus harder to manage, yet—to date—financial firms have generally managed them themselves. This has led to inefficiencies and redundancies across the industry. Every firm has had to become at least basically competent in data management, and many have built some form of in-house platform (some well, some poorly) to help manage their data flows. And we are left with a situation where hundreds (in some cases thousands) of firms are wiring up to the same sources of data, downloading the same data, storing the same data, cleaning the same data, mapping the same data, etc., independently, redundantly, with no economies of scale.

Just as Logistics firms arose to help manufacturing firms manage their increasingly complex and burdensome supply chains, a new type of firm—the Informatics firm—is an inevitable evolution of the market to help companies manage their information supply chains. Informatics firms help companies discover relevant sources of data and help them evaluate that data for fitness to the needs of the firm. They implement and operate the data processing pipelines that are needed to get the information from the supplier to the customer, while validating, cleaning, transforming, mapping, and enriching the data along the way (what we might call Data Engineering) so that it arrives at the customer in a form that is immediately actionable, meaning a firm can do something with it that is pertinent for their business (what we might call Data Science), as is, without requiring further refinement. With a supply chain mentality, Informatics firms pull in the right tools and partners to get the job done.

In effect, Informatics firms ‘manage the flow of information between the point of origin and the point of consumption in order to meet requirements of customers’. Informatics firms can bring economies of scale to the industry by wiring up to a specific source of data once, storing that data once, cleaning that data once, mapping that data once, on behalf of many clients, who can share the costs of those things rather than bearing them independently and redundantly. Informatics firms can also help with the broad implementation of industry standards, which allows for more automation and greater efficiency for everyone.

Firms in information-driven industries, such as financial services, need to think of their core data and analytics workflow as their ‘manufacturing’ process and they need to think about the content that feeds that process as their critical supply chain. As they do so, Informatics firms can help them orchestrate, implement, and operate those supply chains more effectively and efficiently.