Using Corporate Action Data, Reference Data, and End of Day Prices to Research and Monitor Securities Worldwide

Supplier Spotlight


Crux’s mission is to help data flow efficiently between data suppliers and data consumers, and we look to highlight major trends and developments impacting both parties. Today’s spotlight features EDI. Our Q&A was conducted with Jonathan Bloch, CEO.


What’s the quick intro on EDI?

Exchange Data International (EDI), founded in 1994, is a global provider of corporate actions, reference data and end of day prices for global equities, fixed income, listed and OTC derivatives. 

The cornerstone of our success lies in our expertise in integrating and aggregating structured data, our flexibility in delivering it to facilitate investment research, administration and processing, and our ability to fit our clients’ operational requirements.

This reference, corporate actions and end of day pricing data is available for equities, exchange-traded funds and derivatives.

What clients do you work with now and what are some of their use cases with EDI data?

EDI provides corporate action and reference data to financial institutions worldwide, including hedge funds, brokerage firms, market data vendors, front-end trading platforms, middle- and back-office systems, and quant firms. These firms use our data to monitor changes in securities based on events that will impact those securities, maintaining an accurate inventory of listed securities worldwide.

Our clients include hedge funds, fund administrators, index providers and service providers:

  • Hedge fund managers and investors use our corporate action events and dividend data.
  • Index providers use our corporate actions data to adjust their index weightings.
  • Service providers use our corporate actions and reference data to adjust the data that they distribute to their clients and display on their websites.

What are the key datasets that EDI offers? 

EDI’s key product offerings continue to be corporate actions, reference data and end of day prices. We started by covering equities in 2005 and added fixed income in 2007. EDI began coverage of listed and OTC derivatives in 2017. Our goal is to cover all potential asset classes, and we are looking to begin covering municipals, structured securities (including ABS/MBS) and non-US mutual funds by the third quarter of 2020.

You mentioned EDI’s corporate actions data. Would you elaborate a little bit more about that offering?

EDI’s corporate actions service covers events that will impact listed securities worldwide. Examples include mergers, stock splits, spin-offs, name changes, new listings/de-listings, etc. Coverage includes dividends on all equities worldwide, and the data is available in a proprietary format as well as ISO 15022 format.

As these events impact our clients’ portfolios, we provide timely updates on announcements and subsequent changes. This includes key data like effective dates, so the investment community can update their systems and share timely information with their clients. Our corporate action service covers all global equity and fixed income markets dating back to 2007.

EDI also offers a service that provides corporate action updates for listed options, based on the underlying equity corporate action events that will impact each option contract.

What is there to know about EDI’s reference data?

EDI’s security reference data feed includes all of the typical identifiers, such as ISINs, SEDOLs, FIGIs/Bloomberg symbols, exchange tickers and US codes, and is updated four times a day based on the closes of the US, European and Asia-Pacific markets.

This service currently covers equities, fixed income, options and futures worldwide. EDI also offers a Security Reference file covering all the major codes and symbols, such as SEDOLs, ISINs, Bloomberg OpenFIGI and CUSIP.
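To make the cross-referencing concrete, here is a minimal sketch of what one record in an identifier mapping like this could look like. The field names and layout are illustrative assumptions, not EDI’s actual schema; the identifier values shown for Apple are publicly known.

```python
from dataclasses import dataclass

@dataclass
class SecurityReference:
    """Hypothetical security master record cross-referencing common identifiers."""
    isin: str    # ISO 6166 identifier
    sedol: str   # 7-character LSE-issued code
    cusip: str   # 9-character North American code
    figi: str    # Bloomberg OpenFIGI identifier
    ticker: str  # local exchange ticker
    mic: str     # ISO 10383 market identifier code

# Example: Apple Inc.'s Nasdaq listing
apple = SecurityReference(
    isin="US0378331005",
    sedol="2046251",
    cusip="037833100",
    figi="BBG000B9XRY4",
    ticker="AAPL",
    mic="XNAS",
)
print(apple.isin, "->", apple.ticker)
```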

Would you be able to explain EDI’s end of day pricing data?

EDI’s end of day prices are available by exchange or by a portfolio of securities.

This service is available within hours of the close of global exchanges. For certain markets, including the US, EDI provides an unconfirmed file which is available within 30 minutes after the close of the market. 

Clients can compare historical pricing data with current prices and conduct detailed analysis using either the Adjusted Closing Prices data feed or the adjustment factors data feed.

End of Day Pricing Data covers over 170 exchanges worldwide, providing clients with quick access to extensive and accurate closing price data.

Are there other files or feeds that we should know about?

EDI’s adjustment factors file is used by our clients to adjust historical prices for corporate action events that would otherwise create wide, artificial variations in the price series.

This allows clients to compare historical trading patterns and run trading analytics based on corporate action events. These files are delivered at the end of each day, within hours of the close, for all global equity exchanges, and reflect events that impact securities, such as spin-offs, dividends and stock splits, on the effective date of each event.
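As a worked illustration of how a factors file like this is typically applied (a minimal sketch, not EDI’s actual file layout): a 2-for-1 split halves the quoted price on its effective date, so closes before that date are multiplied by a factor of 0.5 to make the series comparable.

```python
import pandas as pd

# Hypothetical raw closes around a 2-for-1 split effective 2020-01-03
closes = pd.Series(
    [100.0, 102.0, 51.5, 52.0],
    index=pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-03", "2020-01-06"]),
)

# The split's adjustment factor (0.5) applies to all dates BEFORE the effective date
factors = pd.Series(1.0, index=closes.index)
factors[closes.index < "2020-01-03"] = 0.5

adjusted = closes * factors
print(adjusted)
# 2020-01-01    50.0  <- pre-split closes halved, so the series is comparable
# 2020-01-02    51.0
# 2020-01-03    51.5
# 2020-01-06    52.0
```

Cash dividends and spin-offs are handled the same way, with factors derived from the value of the event rather than the split ratio.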

EDI’s evaluated pricing service covers 2.7 million global fixed income securities daily, including corporate, government, government agency, municipal and ABS/MBS securities.

What is the challenge that Crux is helping you solve with your clients?

The biggest problem for financial institutions today is integrating data feeds. Not only does every data feed require a different loader, but feeds also need to be matched to existing data feeds. 

From your experience, what are you seeing as challenges for your data user clients?

The challenges are threefold:

  • The cost of replacing an existing feed and the integration of a new one
  • The backlog of the internal IT department 
  • The time taken to carry out the integration work

A trend we’re seeing is that consolidation within the industry is forcing clients to use big data providers whose fees continue to increase. Exchanges also continue to increase fees and add excessive royalty fees for redistribution. In the face of these rising fees for traditional feeds, we see increased interest from clients in taking new and different data feeds.

We also see an increased interest in APIs. As the sheer volume of data increases, clients only want to receive what they require.

What does the future look like for EDI?

EDI is focused on adding complete coverage of all asset classes, looking to cover municipals, ABS/MBS and non-US mutual funds by the end of 2020. We are looking at different ways of delivering data, including adding an API for all of our datasets. We continue to see growth in the North American market, which now accounts for over 70% of our revenues.

We are looking to increase our sales presence in North America and open operations in Europe and the Far East in the future.

To receive these updates, join our community.

Serving Point-of-Sale Data with Additional Ingestion Delivery Mechanisms

Supplier Spotlight


Crux’s mission is to help data flow efficiently between data suppliers and data consumers, and we look to highlight major trends and developments impacting both parties. Today’s spotlight features GfK. Our Q&A was conducted with Cedric Mertes, Commercial Director of GfK Boutique.


What is GfK?

GfK stands for “Growth from Knowledge,” and this credo exemplifies how the company has served its clients over the past 85 years. GfK, a leading market research firm, tracks point-of-sale data at the most granular, SKU level from retailers, resellers, carriers, value-added resellers and distributors in over 75 countries on a weekly and monthly basis, giving clients the information to grow their businesses.

What type of clients do you work with?

GfK works with a range of clients, from manufacturers and retailers to hedge funds and investment managers. GfK Boutique works directly with the investment managers, primarily in the Tech and Consumer Durables segments, including:

  • Component/Semiconductor Suppliers
  • Handsets
  • PCs/Tablets
  • GPUs
  • CPUs
  • Home Appliances
  • Home Audio
  • Action Cameras
  • Enterprise Software
  • IT Security
  • Enterprise Storage
  • Networking Equipment
  • Servers
  • Digital Cameras & Lenses
  • Navigation Devices
  • IT Peripherals
  • Contact Lenses
  • Printers & Cartridges
  • Tires
  • TVs
  • Gaming Consoles & Software
  • Watches
  • Wearables

What are some examples across sectors of how GfK data is used?

  • Analyzing 5G penetration and content wins/losses in terms of end-customer adoption of 5G phones and which component suppliers are gaining dollar content in those devices.
  • Looking at memory DRAM and NAND content growth from an end-demand standpoint and their impact on supply and prices.
  • Identifying the end-demand success or failure of recent smartphone launches from Apple, Huawei, Xiaomi and Samsung, as well as their subsequent impact on the component suppliers of these devices across the Handset, TV and Wearable categories.
  • Identifying how the Gaming PC, Console and Server markets drive overall CPU and GPU demand and the impact on Intel, NVIDIA and AMD; analyzing the adoption of new gaming consoles from Sony and Nintendo, researching the success or failure of new product launches from Activision and Electronic Arts.
  • Monitoring share shifts at Sonos, Garmin and Logitech based on promotional activity ahead of key shopping events such as Singles’ Day, Black Friday, the Lunar New Year, etc.
  • Quantifying the success of Alcon’s new Daily Contact Lens and its impact on CooperVision’s market share.

What trends are you seeing in the industry?

Many of our clients have new data analyst teams that prefer an FTP feed that allows them to manipulate the data themselves. Given the number of alternative data sets that have come to market over the years, we recognize many of our clients are looking to ingest as much raw data as possible.

However, given the continued demand for GfK’s traditional fundamental products and analyst support, GfK has partnered with Crux to expand its resources to support additional delivery mechanisms, rather than shift resources from its traditional products; Crux has been instrumental in allowing GfK to better serve its clients.

How has the partnership with Crux affected the user experience?

By partnering with Crux, GfK is able to work with additional clients on a platform that is already built out, which eliminates the onboarding process and all the accompanying frustrations.

What has GfK’s experience with Crux been?

Working with Crux has been an excellent experience thus far! The team is extremely diligent, responsive and timely throughout the data onboarding process. All forms of communication were clear and concise. GfK has an extremely granular data set and a large quantity of files, which can be challenging for traditional clients. Crux has been able to eliminate that challenge and streamline the process. GfK will be launching new products outside of the consumer/tech industry and looks forward to working with Crux on additional datasets and clients in 2020!

To receive these updates, join our community.

Developing Proprietary Models using Natural Language Processing and Machine Learning Strategies

Supplier Spotlight


Crux’s mission is to help data flow efficiently between data suppliers and data consumers, and we look to highlight major trends and developments impacting both parties. Today’s spotlight features Brain, which just released a new alternative dataset, “Brain Language Metrics on Company Filings,” based on Natural Language Processing analysis of 10-K and 10-Q reports for the largest US stocks. Our Q&A was conducted with Francesco Cricchio, PhD and Matteo Campellone, PhD.

What is Brain?

Brain is a research company that develops proprietary signals and algorithms for investment strategies. Brain also supports clients in developing, optimizing and validating their own proprietary models.

The Brain platform includes Natural Language Processing (NLP) and Machine Learning (ML) infrastructure that enables clients to integrate state-of-the-art approaches into their strategies. All of our software is highly customizable to support the investment approach of our clients.

Our system incorporates alternative data and evaluates its relevance to financial models. 

What are some examples of signals that investors should know about?

Two good examples are the Brain Sentiment Indicator (BSI) and the Brain Machine Learning Stock Ranking (BSR).

The BSI is a sentiment indicator of global stocks produced by an automated engine that scans the financial news flow to gain a deeper understanding of the dynamic factors driving investor sentiment. This indicator relies on various NLP techniques to score financial news by company and extracts aggregated metrics on financial sentiment.
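Brain has not published its implementation, but the aggregation step described here can be pictured with a minimal sketch: per-article sentiment scores, assumed here to lie in [-1, 1], are averaged per company per day to form a company-level indicator. The data and column names below are invented for illustration.

```python
import pandas as pd

# Invented per-article sentiment scores in [-1, 1], tagged by company
news = pd.DataFrame({
    "date": pd.to_datetime(["2020-01-02", "2020-01-02", "2020-01-03", "2020-01-03"]),
    "ticker": ["AAPL", "AAPL", "AAPL", "MSFT"],
    "score": [0.4, -0.1, 0.6, 0.2],
})

# Average article scores into a daily company-level sentiment indicator;
# a production indicator would also smooth over a lookback window.
daily = news.groupby(["ticker", "date"])["score"].mean()
print(daily)
# AAPL  2020-01-02    0.15
#       2020-01-03    0.60
# MSFT  2020-01-03    0.20
```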

The incorporation of BSI rankings helps clients build quantitative strategies that include both sentiment and short-term momentum indicators. On a longer time horizon, the application of BSI adds value to strategies that seek companies that are under- or over-priced due to very low or very high sentiment. 

The BSR is used to generate a daily stock ranking based on the predicted future returns of a universe of stocks over various time horizons. BSR relies on machine learning classifiers that non-linearly combine a variety of features, together with a series of techniques aimed at mitigating the well-known overfitting problem for financial data with a low signal-to-noise ratio. The model uses a dynamic universe that is updated each year to avoid survivorship bias.

The incorporation of BSR enhances quantitative models and long/short strategies by adding a stock ranking that non-linearly combines stock specific market data with market regime indicators and calendar anomalies using advanced ML techniques.
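Brain’s actual model is proprietary, so the following is only a schematic sketch of the general pattern: a classifier is trained on stock-level features, cross-validation serves as one standard guard against overfitting, and the universe is ranked by predicted probability of outperformance. Every feature and label below is randomly generated for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Invented features per stock (momentum, volatility, regime flags, ...)
# and labels: 1 if the stock outperformed over the horizon, else 0
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

clf = GradientBoostingClassifier(max_depth=2, n_estimators=100)

# Cross-validation: one common safeguard against overfitting noisy data
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Rank today's universe by predicted probability of outperformance
clf.fit(X, y)
today = rng.normal(size=(50, 8))
ranking = np.argsort(-clf.predict_proba(today)[:, 1])  # best first
print("Top 5 stocks by rank:", ranking[:5])
```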

How are you different from other firms?

We’ve developed a scientific and rigorous approach based on our years of research and our experience implementing statistical models in state-of-the-art software.

We try to be as rigorous as possible in our models, which is especially important when extracting information from financial time-series data, where the signal-to-noise ratio is very low and the risk of overfitting makes validating meaningful signals difficult.

What are some use cases for your data?

We offer a two-fold solution for clients. Financial firms can combine our systematic signals (BSI) with their own proprietary signals or algorithms to create a more complete model and perform back-testing validation. 

Alternatively, clients can come to us as a consultancy to support or validate a specific methodology or create a signal they can backtest for their hypotheses using ML or other advanced statistical techniques.

We also develop proprietary signals based on market and economic factors. One example of this is asset allocation models that try to capture risk-on and risk-off phases in the market.

What trends are you seeing in the market?

There are a number of providers of similar signals today, and we see NLP-based sentiment signals being increasingly adopted in the market. Some providers are moving towards offering integrated platforms based on their technology, often with graphical interfaces. Our differentiator is that we are focused on continuously enhancing our algorithms. When deploying our integrated solutions, there is a lot of value in the customization of the product for each client.

Brain’s proprietary NLP algorithm uses semantic rules and dictionary-based approaches by looking at financial news to calculate sentiment on stocks. Beyond traditional sentiment data, we also developed other language metrics — like language complexity in earnings calls or similarity of language in regulatory filings — to investigate the correlation of these metrics with the company’s financial performance. 
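Brain’s metrics themselves are proprietary, but the “similarity of language” idea can be sketched with standard tools: represent two consecutive filings as TF-IDF vectors and take their cosine similarity, so a sharp drop in similarity flags an unusually large change in language. The filing excerpts below are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented risk-factor excerpts from two consecutive 10-K filings
filing_prev = "Our results depend on demand for our products and supply chain stability."
filing_curr = ("Our results depend on demand for our products, supply chain "
               "stability, and the outcome of pending litigation.")

# TF-IDF vectors for each filing, then their cosine similarity
tfidf = TfidfVectorizer().fit_transform([filing_prev, filing_curr])
similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"Year-over-year language similarity: {similarity:.2f}")
```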

Great, so who is your target clientele?

We have two main client groups. Large, global quant hedge funds look to us for our raw datasets. Other investment companies look to us for customized solutions, which we create and integrate for them based on our platform.

What are your backgrounds? 

As the Co-Founders of Brain, we share a common background in Physics and research. We focused on nurturing this as a common thread throughout the team. 

Matteo, Executive Chairman and Head of Research, worked as a theoretical physicist in the field of statistical mechanics of complex systems and non-linear stochastic equations. After receiving his Ph.D. in physics and dedicating years to research, he obtained an MBA at IMD Business School. He then went on to work in various areas of finance, from financial engineering to risk management and investing.

Francesco, CEO and Chief Technology Officer, obtained a Ph.D. in Computational Physics with a focus on solving complex computational problems with a wide range of techniques. He then focused on using ML methods and advanced statistics in the industrial space. Francesco’s technological know-how underpins the industrial machine learning solutions we deploy in our robust production environment.

What led you to work with Crux?

We believe that the partnership with Crux is a particularly good fit since Brain develops alternative datasets based on NLP and ML techniques while Crux builds and manages pipelines from data suppliers to its cloud platform. Thus, we are excited that our datasets will be delivered effectively to clients without performing different types of integration procedures for each new client we onboard. We rely on the Crux platform to help us scale our products more efficiently.

Thanks so much! Really enjoyed chatting with you both.

About Supplier Spotlight Series


In our mission to help data flow efficiently between data suppliers and data consumers, we look to highlight major trends and developments impacting both parties. The ‘Supplier Spotlight’ series is an impactful content series focused on sharing the latest developments by suppliers and their datasets delivered by Crux.



To receive these updates, join our community.

Managing and Monitoring our Data Pipelines

Data Operations (DataOps) at Crux


Beyond the core Crux Deliver feature of building data pipelines, a critical element of our value proposition is ensuring that the data pipelines are well-maintained. We sat down with Tim Marrin, Director of Data Operations, to learn how his team keeps over 1,000 dataset pipelines running smoothly.

First off, what is the scale of what Crux is doing?

Did you know that the Crux team currently processes tens of thousands of data instances (or ingestion pipelines) over a 24-hour period? Each of these data instances can have multiple discrete tasks and some data instances can even have over a hundred discrete tasks. That’s not an insignificant number of activities and processes running through our pipelines. 

What is the role of DataOps in this?

One of the jobs of our DataOps team is to monitor these tens of thousands of ingestion pipelines per day. Pipelines are composed of tasks that are orchestrated and run on cloud-based microservices, which the team also monitors to ensure consistent, reliable and fast processing and delivery of data.
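Crux has not published its internal tooling, so the following is just a toy sketch of the pattern Tim describes: each data instance is a sequence of discrete tasks, and a monitor sweeps task statuses and raises alerts on failures. All task names and statuses here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    status: str  # "success", "running", or "failed"

# Hypothetical ingestion pipeline: one data instance, several discrete tasks
pipeline = [
    Task("fetch_from_supplier_ftp", "success"),
    Task("validate_schema", "success"),
    Task("normalize_and_load", "failed"),
    Task("notify_consumers", "running"),
]

# The monitor flags failed tasks so an operator (or automation) can intervene
failed = [t.name for t in pipeline if t.status == "failed"]
if failed:
    print(f"ALERT: {len(failed)} task(s) failed: {', '.join(failed)}")
```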

What is the full range of services that Crux DataOps provides?

The easiest way is to bullet this out. We have:

  • 24/7 global support that monitors all data feeds
  • Cloud-based big-data stores and microservices for data processing and storage
  • Dedicated telephone, email, ticket-based support based on client preference
  • Incident and problem management with regular status updates to clients
  • Identification and proactive planning for format and schema changes with suppliers and data consumers
  • Fully managed and automated CI/CD infrastructure with canary deployments
  • Automated notifications through various delivery methods to show data availability and metadata
  • Full support and monitoring of supplier data feeds – handle support of suppliers on behalf of data consumers
  • Full transparency with reporting and analytics on production incidents and outages

What are example errors that the DataOps team monitors for?

This question comes up often with our clients and prospects; these are the categories we lean on (a small validation sketch follows the list):

  • Data validation issues
    • Invalid schema
    • Invalid datatype
    • Missing values 
  • File delivery timing issues
  • Market and calendar holiday-related failures
  • Remote source not available (supplier FTP unavailable, supplier late with files, incomplete files, etc.)
  • Schema issues due to unscheduled changes
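To make the first category concrete, here is a minimal pandas sketch of the kinds of checks involved; the column names and rules are hypothetical, not Crux’s actual validation suite.

```python
import pandas as pd

EXPECTED_COLUMNS = {"date", "ticker", "close"}

def validate(df: pd.DataFrame) -> list:
    """Return a list of validation errors for one delivered file."""
    errors = []
    # Invalid schema: expected columns that never arrived
    missing_cols = EXPECTED_COLUMNS - set(df.columns)
    if missing_cols:
        errors.append(f"invalid schema: missing columns {sorted(missing_cols)}")
    # Invalid datatype: prices must be numeric
    if "close" in df.columns and not pd.api.types.is_numeric_dtype(df["close"]):
        errors.append("invalid datatype: 'close' is not numeric")
    # Missing values: nulls where data is required
    null_counts = df.isna().sum()
    for col, n in null_counts[null_counts > 0].items():
        errors.append(f"missing values: {n} null(s) in '{col}'")
    return errors

delivered = pd.DataFrame({
    "date": ["2020-01-02", "2020-01-03"],
    "ticker": ["AAPL", "AAPL"],
    "close": [300.35, None],
})
print(validate(delivered))  # ["missing values: 1 null(s) in 'close'"]
```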

What are some of the challenges that our DataOps team is working on? 

The big challenge is supporting our clients, platform and data simultaneously while we scale the number of data instances that we monitor. As we grow, the complexity increases with the different instances, consumption methods and SLAs we work with. These challenges can keep me up at night, but they also keep us at the top of our game.

How does this compare to what you were doing previously?

I spent 15 years working on trading floors, most recently with the Electronic Trading SRE team at Goldman Sachs. We faced similar issues of scale, managing complex systems, real-time processing, and high throughput. I honestly find myself drawing upon that experience on a daily basis as we try to solve for even greater challenges here at Crux.

With this complexity, why should a firm work with us to outsource their data pipeline operations?

We’re leveraging performant big data and cloud solutions, combined with best practices in ITSM (IT Service Management), to provide a uniquely high level of service and technical solutions. The challenge of scaling to manage the volume of datasets is what’s driving the industry’s interest in Crux, and the DataOps team is meeting that challenge through innovative engineering solutions. I’m really proud of our ability to solve these technical challenges while providing a white-glove client experience at a startup.

To learn more about Crux DataOps and get started, fill out this form:



Delivering Technology Industry Data Without Friction

Supplier Spotlight

International Data Corporation (IDC)

What is the background on IDC? 

IDC has been in business for over 50 years. We do comprehensive and very deep research on the Technology industry. 

The bulk of IDC’s business is with large technology vendors who rely on IDC’s industry data and research to make strategic decisions around product development, competitive positioning, new investment opportunities, market entry, product positioning, etc. IDC has also been working with financial clients for many, many years. On the buy-side, that tends to be investors doing deep fundamental work in the tech space, namely long/short discretionary hedge funds, activists, and private equity clients.

What are IDC’s key benefits for clients? 

IDC has unique and highly structured data that cannot be found in SEC filings, company disclosures, on the internet, or elsewhere in the public domain. IDC data sets provide an independent, comprehensive, and coherent picture of technology markets worldwide, which is understood by our clients to be the best proxy for ground truth available.

This is a good point to speak to the key categories of data that IDC offers. What schemas does IDC utilize (frequency, type, etc.)?

IDC has 25 distinct data sets (called Trackers) oriented around various worldwide technology markets (e.g. Mobile Phones, PCs, Servers, Storage, Switches & Routers, Public Cloud Services, Cloud Infrastructure, Software, etc.). Collectively, IDC covers nearly 3,000 technology firms globally, of which over 600 are publicly traded. The data offers comprehensive and very granular insight into Tech-industry fundamentals (e.g. revenue, unit shipments & capacity shipments by vendor, segment, country, price band, channel, form factor, etc.). 

From a geographic standpoint, IDC has country-level data on up to 110 countries. The frequency of the data is monthly, quarterly, or semiannually, depending on the data set. Historical data varies by technology market depending on the maturity of the technology and typically extends back close to the inception of the market. For instance, PC data goes back to 1995, while Mobile Phone data extends back to 2004. IDC’s tracker data is supported by detailed industry taxonomies and underpinned by a rigorous data collection methodology involving both top-down and bottom-up approaches: information gathered from our extensive regional and local relationships, resources, and data sources is reviewed and reconciled through direct contact with leading technology vendors.

IDC has more than 1,100 technology analysts and research offices in over 50 countries. We leverage a multitude of data sources, including published financial statements and public data, import records, contract details, 3rd-party data from OEMs, component vendors, platform suppliers, and other channel and supply-chain constituents. IDC also curates information from distributor data feeds, import/export records, and pricing data scraped from the web. IDC analysts have over 85,000 vendor client interactions every year and leverage extensive consumer and B2B surveys, with over 350,000 respondents annually.

Can you provide examples of questions that a data consumer might be interested in answering with IDC data?

IDC helps investors understand the size, structure, and competitive and growth dynamics of various tech markets. The questions can include: Who is winning in particular market segments and geographies? Who is gaining traction in various channels? What are the key form factor trends, and pricing dynamics? How many units or how much capacity is being shipped? What is the size and age of the installed base and who/what is vulnerable for displacement by new technologies and competing vendors? Which workloads are moving to the cloud and which still have longevity with on-premise deployments? What architectures and components are being used in new data center construction? Which vendors sold what devices at what prices through which channels last month in China?

Historically, our discretionary clients have used IDC’s tracker data to build sophisticated market models that identify market-share gainers and donors and surface long/short ideas and paired trades. The newer quant-oriented use cases might focus on sector rotation (e.g. overweight/underweight Tech), segment rotation within Tech (e.g. Semis vs Software), using market-share data to enhance “quality” factors in factor models, or even using tracker data as inputs for global macro insights.
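The share-shift arithmetic at the heart of that use case takes only a few lines once tracker-style data is in hand. The sketch below uses invented figures and column names, not IDC data: compute share per vendor per quarter, then the quarter-over-quarter change that separates gainers from donors.

```python
import pandas as pd

# Invented tracker-style rows: revenue by market, quarter, and vendor
tracker = pd.DataFrame({
    "market":  ["PCs"] * 4,
    "quarter": ["2019Q3", "2019Q3", "2019Q4", "2019Q4"],
    "vendor":  ["VendorA", "VendorB", "VendorA", "VendorB"],
    "revenue": [60.0, 40.0, 55.0, 50.0],
})

# Market share per vendor within each market/quarter
totals = tracker.groupby(["market", "quarter"])["revenue"].transform("sum")
tracker["share"] = tracker["revenue"] / totals

# Quarter-over-quarter share shift: positive = gainer, negative = donor
shift = tracker.pivot(index="vendor", columns="quarter", values="share")
shift["change"] = shift["2019Q4"] - shift["2019Q3"]
print(shift.sort_values("change", ascending=False))
```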

So you’ve traditionally worked with technology-focused investors; what are some trends you’re seeing with them?

Over the past couple of years, we’ve seen our traditional clients becoming increasingly focused on extracting insights from data in more efficient and often more automated ways. On an individual level, investment analysts are becoming more adept with programming languages, statistics packages, and analytical tools, so they are doing heavier lifting with large amounts of raw data. Many discretionary firms are creating centralized functions with data science teams to help fundamental analysts with screening, idea generation, and/or new quant-based insights into existing holdings. Some firms are embedding quants with their fundamental teams. Overall, fundamental analysis is becoming more data intensive and automated.

At the same time, IDC is working with clients that have full-blown quantitative and systematic mandates. Interestingly, these firms are trying to find ways to acquire more domain expertise, and we can help them do that. So, these previously distinct skill sets (quant/systematic and fundamental/discretionary) are converging along a spectrum of capabilities and mandates. These evolving discretionary use cases and new systematic clients all require more automated, precise and timely delivery of data in various formats. 

We spoke about duplication of work in the past, can you tell me a little more about this?

For IDC customers and prospective customers, their technical staff may face a learning curve and upfront work to understand how to ingest our data into their processes, which requires a lot of back-and-forth with IDC’s technical staff. This happens across all of IDC’s clients, for all the data vendors they work with. The multiplicative effect is time-consuming and strains their internal technical staff, which ultimately inhibits IDC’s clients’ ability to scale their data onboarding, ingestion, and production operations.

When it comes to ingesting data, can you elaborate on that?

Each of these new and evolving use cases requires a different way of delivering raw data to our clients – flat files via FTP, cloud buckets, or APIs. In addition to delivery, on IDC’s end there is a lot of work that goes into the preparation of data to take it from raw form to a form that is actionable. This is where Crux comes into play.

Let’s talk quickly about your experience working with Crux and delivering data through them.

The two pieces that Crux is helping to address for IDC and IDC’s clients are:

  • The back and forth friction that is required to get a new data set stood up and
  • The ability for our clients to ingest the data into their systems in the format(s) they want

We’re still ramping up our relationship with Crux and just finished the first part, but we’re excited to now be able to deliver our data sets through a variety of file formats quickly to our clients. Onboarding our datasets with the Crux team was relatively painless.

What does the future hold for IDC?

Overall, we think IDC data provides a perfect framework and foundation for creating complex data ensembles or unlocking the value of higher-frequency alternative data in the Tech space. In fact, Yin Luo’s Quantitative Research group at Wolfe Research back-tested our data and wrote a very interesting report last year on some potential applications of IDC data for systematic investors. In our view, IDC data is still tremendously underutilized by buy-side clients, particularly for these new types of use cases. Now that we’re working with Crux, our existing and potential clients have a streamlined way to explore, test, ingest, and ultimately exploit IDC data, and we’re very excited about that.

Q&A conducted with Brian Murphy, Director, Financial Sales at IDC.




To receive these updates, join our community.

Crux Hits 1,000 Datasets!


Crux Community,

We are very excited to announce we now have over 1,000 datasets (some small, some massive) pumping through our pipelines and being delivered to data consumers, with new datasets being added every day! We are honored to have partnerships with over 60 leading data suppliers of a wide range of valuable data. It has been a very busy year for us here at Crux. We are moving and growing fast. We officially launched our platform in March of this year (can’t believe it’s only been 7 months!) and since then we have been cementing partnerships, building data pipelines, growing the team, forging strategic relationships, and of course delighting customers.

As part of delighting customers, we have built out more ways for them to easily consume ingested, validated, and processed data from Crux: in addition to the RESTful API and Python client we launched earlier this year, we can now push data directly into a customer’s Snowflake or Amazon S3 account, and we will shortly release the ability to execute SQL queries and create ODBC connections directly against Crux-delivered data in the cloud. Just as we keep wiring up more and more datasets, we will keep building out new ways for customers to easily get their hands on that data.
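For a customer receiving files in S3, picking up a delivery can be as simple as the sketch below. The bucket name and key layout are hypothetical, not Crux’s actual delivery structure; reading s3:// paths with pandas additionally requires the s3fs package.

```python
import boto3
import pandas as pd

# Hypothetical bucket and prefix where Crux-delivered files land
BUCKET = "my-company-crux-deliveries"
PREFIX = "vendor-x/daily/"

s3 = boto3.client("s3")

# List the delivered files under the prefix
response = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
keys = [obj["Key"] for obj in response.get("Contents", [])]

# Load the most recent delivery straight into a DataFrame
latest = sorted(keys)[-1]
df = pd.read_csv(f"s3://{BUCKET}/{latest}")
print(f"Loaded {len(df)} rows from {latest}")
```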

Our team has been growing fast to handle all this, especially in Engineering, and we have been thrilled to have Mark Etherington join us as CTO this summer. Mark and team have hit the ground running and are ramping up Crux capabilities fast on numerous fronts.

We know that macroeconomic data is fundamental to most activities in financial services. We are pleased to announce that we are offering free delivery of a collection of public macroeconomic datasets for all customers. If you are already using Crux, we hope you will find these datasets a nice value-add. If you are new to Crux, these datasets are a great way to get started with us. Connect with us to get access today.

Thank you all for being an engaged community. We learn from our conversations with you every day and that helps us continually improve on our quest to make data delightful.


All the best,
Philip

Welcome Vertical Knowledge to Crux!


Vertical Knowledge datasets now delivered by Crux! Vertical Knowledge provides rich historical libraries of auto, retail, real estate, travel, business intelligence, and other web data collected from open sources. Combining the power of the Vertical Knowledge web collection engine with the flexibility and depth of the Crux data management platform transforms the way public sector and commercial institutions identify, reference, and analyze open source data to solve their most difficult business problems. Get datasets delivered here.

Welcome IDC to Crux!


IDC’s global technology research data can now be delivered by Crux. IDC is the premier global provider of market intelligence, advisory services, & events for the information technology, telecommunications, and consumer technology markets. Learn more here.

Welcome GTCOM to Crux!


GTCOM-US and their JoveBird Sentiment Data now delivered by Crux! GTCOM’s advanced NLP & semantic computing technologies analyze global news & social streams, providing corporate users with comprehensive scenario-based solutions. Get datasets delivered from Crux.