Mesh: Big Data Processing

Process big data like small data

What is Mesh?

Mesh is a powerful big data processing framework that requires no specialist engineering or scaling expertise.

Mesh controls and manages the flow, partitioning and storage of big data throughout the data warehousing lifecycle, in real time. It uses deterministic algorithms to generate insights and drive decision making.

Big Data Processing Framework
Hadean Process Model Framework Diagram
Built upon the Hadean process model and its capability for scale-invariant programming, Mesh drastically reduces the need for specially trained data management engineers.

Quants, data scientists, and programmers are all able to write and run algorithms at an arbitrary scale, whether on a process axis or memory axis, without the impediments of manual effort, time and spiralling infrastructure costs.

Features

Hyper Scale Computing

DATA PROCESSING AND ETL FOR BIG DATA

Mesh’s dynamic scalability and opsless deployment enable large and complex data sets to be processed faster and more reliably: ETL (Extract, Transform, Load), data processing, and numerical computing problems are handled seamlessly.
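For readers unfamiliar with the pattern, a minimal single-machine sketch of the Extract, Transform, Load stages mentioned above is shown below. This is generic Python for illustration only, not Mesh’s API; the record fields and sources are invented.

```python
def extract(rows):
    """Extract: pull raw records from a source (here, an in-memory list)."""
    for row in rows:
        yield row

def transform(records):
    """Transform: clean and reshape each raw record."""
    for rec in records:
        yield {"name": rec["name"].strip().title(), "value": float(rec["value"])}

def load(records):
    """Load: write transformed records to a destination (here, a list)."""
    return list(records)

# A toy source; in practice this would be files, queues or databases.
raw = [{"name": "  alice ", "value": "3.5"}, {"name": "BOB", "value": "2"}]
warehouse = load(transform(extract(raw)))
print(warehouse)
```

Because each stage is a generator pipeline, records stream through one at a time; a framework like Mesh distributes and scales stages like these across many machines.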

Cloud Native Architecture

GAIN EFFICIENCY AND REDUCE COST

Mesh aims to put the handling of Big Data back into the hands of a single data scientist or engineer, drastically reducing the large teams of highly trained, expensive specialists needed to build, manage and scale big data pipelines on the Hadoop and Spark ecosystems.

UNRESTRICTED DATA COMPATIBILITY

Mesh’s unrestricted data type compatibility allows text, geolocation data, IP addresses, phone numbers, matrices and tensors, or sensor readings to be used. With Mesh, it’s possible to process any data type with increased speed, reliability and ease.

Customer Success

Financial Services

A financial services organisation wanted an alternative means of performing risk analysis across a large portfolio. Mesh was used to run a Monte Carlo simulation to compute Value at Risk (VaR) on the portfolio.

“By simply changing the parameters, a single developer can run billions of simulations just as easily as a few, and significantly reduce operating costs and time to market.”

Results

  • Mesh executes predictably at scale whilst running at native speed.
  • Platform allows simulations to be run at any scale without changes to the source code.
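To make the case study concrete, here is a minimal single-machine sketch of a Monte Carlo VaR calculation. It assumes normally distributed returns and invented portfolio parameters, and it is plain Python rather than Mesh’s API; on Mesh the same simulation logic would be scaled out across machines without source changes.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def monte_carlo_var(value, mu, sigma, n_sims, confidence=0.99):
    """Estimate one-period Value at Risk by simulating portfolio returns.

    Each trial draws a return from a normal distribution (a simplifying
    assumption) and records the resulting loss; VaR is the loss that is
    not exceeded at the given confidence level.
    """
    losses = sorted(-value * random.gauss(mu, sigma) for _ in range(n_sims))
    return losses[int(confidence * n_sims)]

# Hypothetical portfolio: £1m, small positive drift, 2% daily volatility.
var_99 = monte_carlo_var(value=1_000_000, mu=0.0005, sigma=0.02, n_sims=100_000)
print(f"99% one-day VaR: {var_99:,.0f}")
```

Scaling this from thousands to billions of trials is exactly the kind of parameter change the quote above refers to: the algorithm is unchanged, only the number of simulations grows.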

Hadean, The Power Behind Mesh

Removing the need for middleware, Hadean’s cloud-native distributed platform provides limitless scale in real time with cross-cloud distribution.

Related Posts

Scaling Monte Carlo Simulations On Apache Spark. Can We Do Better?

The concept of big data is straightforward: run relatively simple algorithms where the data sets are so large that many machines are needed to hold them. The implementation, however, is surprisingly...

A Simple and Predictable Big Data Stack

Where things stand today: it has been noted that the complexity of big data frameworks like Hadoop and Spark makes them less productive tools than their small data counterparts, Python, R and Excel....

Find Out How Hadean Delivers Massively Distributed Computing

Hadean Distributed Computing White Paper

Download the Whitepaper Today