Mesh: Big Data Processing

Process big data like small data

What is Mesh?

Mesh is a powerful big data processing framework that requires no specialist engineering or scaling expertise.

Mesh controls and manages the flow, partitioning and storage of big data throughout the data warehousing lifecycle, and can do so in real time. It uses deterministic algorithms to generate insights and drive decision making.

Big Data Processing Framework

Built upon the Hadean process model and its capability for scale-invariant programming, Mesh drastically reduces the need for specially trained data management engineers.

Quants, data scientists, and programmers are all able to write and run algorithms at arbitrary scale, whether scaling across processes or across memory, without the impediments of manual effort, added time and spiralling infrastructure costs.
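Mesh's own API is not shown on this page, but the idea of scale-invariant code can be sketched: the per-record algorithm is written once, and only a scaling knob changes when the workload grows. A minimal illustration using Python's `multiprocessing` as a hypothetical stand-in for Mesh's scheduler (the `risk_score` function and field names are invented for the example):

```python
from multiprocessing import Pool

def risk_score(record):
    # Pure, deterministic per-record computation: the same code
    # applies whether the input is ten rows or ten billion.
    return record["exposure"] * record["default_probability"]

def run(records, workers=4):
    # The worker count is the only scaling knob; the algorithm is untouched.
    with Pool(workers) as pool:
        return pool.map(risk_score, records)

if __name__ == "__main__":
    data = [{"exposure": 100.0, "default_probability": 0.01 * i}
            for i in range(1, 5)]
    print(run(data))
```

In a real deployment the runtime, not the programmer, would decide how to partition the records across machines.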

Features

Data Processing and ETL for Big Data

Mesh’s dynamic scalability and opsless deployment enable faster, more reliable processing of large and complex data sets: ETL (Extract, Transform, Load), data processing, and numerical computing problems are handled seamlessly.
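As an illustration only (this is not Mesh's API), an ETL step is conceptually a pure transformation from raw source records to a validated target schema. A self-contained sketch, with invented field names and an in-memory dict standing in for the target store:

```python
import csv
import io

RAW = """user_id,amount,currency
1,10.50,GBP
2,7.25,GBP
3,not_a_number,GBP
"""

def extract(text):
    # Extract: parse raw CSV into dictionaries.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: coerce types, drop malformed rows, convert pounds to pence.
    out = []
    for row in rows:
        try:
            out.append({"user_id": int(row["user_id"]),
                        "amount_pence": round(float(row["amount"]) * 100)})
        except ValueError:
            continue  # skip rows that fail validation
    return out

def load(rows, store):
    # Load: write cleaned rows into the target store (a dict here).
    for row in rows:
        store[row["user_id"]] = row["amount_pence"]
    return store

warehouse = load(transform(extract(RAW)), {})
print(warehouse)  # {1: 1050, 2: 725}
```

Because each stage is deterministic and per-record, the same pipeline can in principle be partitioned across any number of workers.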

Gain Efficiency and Reduce Cost

Mesh aims to put the handling of big data back into the hands of a single data scientist or engineer, drastically reducing the large teams of highly trained, expensive specialists needed to build, manage and scale big data pipelines on the Hadoop and Spark ecosystems.

Cloud Native Architecture

Unrestricted Data Compatibility

Mesh’s unrestricted data type compatibility allows text, geolocation data, IP addresses, phone numbers, matrices and tensors, and sensor readings all to be used. With Mesh, it’s possible to process any data type with increased speed, reliability and ease.

Customer Success

Financial Services

A financial services organisation wanted an alternative means of performing risk analysis across a large portfolio. Mesh was used to run Monte Carlo simulations to compute Value at Risk (VaR) on the portfolio.

“By simply changing the parameters, a single developer can run billions of simulations just as easily as a few, and significantly reduce operating costs and time to market.”

Results

  • Mesh executes predictably at scale whilst running at native speed
  • Platform allows simulations to be run at any scale without changes to the source code
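The case study above includes no code, but the underlying technique can be sketched on a single machine. A minimal Monte Carlo VaR estimate under a normal-returns model (all names, parameters and the model itself are illustrative assumptions, not Mesh's API or the customer's actual method):

```python
import random

def monte_carlo_var(mu, sigma, value, n_sims, confidence=0.99, seed=42):
    # Simulate n_sims one-day portfolio returns from a normal model and
    # take the loss at the given confidence level as the Value at Risk.
    rng = random.Random(seed)
    losses = sorted(-value * rng.gauss(mu, sigma) for _ in range(n_sims))
    return losses[int(confidence * n_sims)]

# In Mesh's framing, scaling from a few simulations to billions is just a
# change to the n_sims parameter; here it changes only the runtime.
var_99 = monte_carlo_var(mu=0.0005, sigma=0.02, value=1_000_000,
                         n_sims=100_000)
print(f"1-day 99% VaR: £{var_99:,.0f}")
```

Since each simulated path is independent, the sampling loop is trivially parallelisable, which is what makes this workload a natural fit for a distributed runtime.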
