Big Data Processing
What is Mesh?
Built upon the Hadean process model and its capability for scale-invariant programming, Mesh drastically reduces the need for specially trained data management engineers.
Quants, data scientists, and programmers can all write and run algorithms at arbitrary scale, whether along the process axis or the memory axis, without the impediments of manual effort, lost time, and spiralling infrastructure costs.
Data Processing and ETL for Big Data
Mesh’s dynamic scalability and opsless deployment enable large and complex data sets to be processed faster and more reliably: ETL (Extract, Transform, Load), data processing, and numerical computing problems are handled seamlessly.
Gain Efficiency and Reduce Cost
Mesh aims to put the handling of Big Data back into the hands of a single data scientist or engineer. It drastically reduces the need for the large teams of highly trained, expensive specialists required to build, manage, and scale big data pipelines on the Hadoop and Spark ecosystems.
Unrestricted Data Compatibility
Mesh’s unrestricted data-type compatibility supports text, geolocation data, IP addresses, phone numbers, matrices and tensors, and sensor readings alike. With Mesh, any data type can be processed with increased speed, reliability, and ease.
A financial services organisation wanted an alternative means of performing risk analysis across a large portfolio. Mesh was used to run a Monte Carlo simulation to compute Value at Risk (VaR) on the portfolio.
“By simply changing the parameters, a single developer can run billions of simulations just as easily as a few, and significantly reduce operating costs and time to market.”
- Mesh executes predictably at scale whilst running at native speed
- The platform allows simulations to be run at any scale without changes to the source code
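To illustrate the kind of workload described above, here is a minimal single-process sketch of a Monte Carlo VaR estimate. This is not Mesh’s API; the function name, portfolio parameters, and geometric Brownian motion model are illustrative assumptions. On Mesh, the same logic would be spread across processes simply by raising the simulation count.

```python
import numpy as np

def monte_carlo_var(mu, sigma, value, n_sims=100_000, horizon_days=1,
                    confidence=0.99, seed=0):
    """Estimate Value at Risk by simulating portfolio returns.

    mu, sigma -- assumed annualised drift and volatility
    value     -- current portfolio value
    Returns the loss not exceeded with the given confidence.
    """
    rng = np.random.default_rng(seed)
    dt = horizon_days / 252  # horizon as a fraction of a trading year
    # Simulate terminal portfolio values under geometric Brownian motion.
    z = rng.standard_normal(n_sims)
    terminal = value * np.exp((mu - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z)
    losses = value - terminal
    # VaR is the confidence-level quantile of the loss distribution.
    return float(np.quantile(losses, confidence))

# Hypothetical portfolio: £1m, 5% drift, 20% volatility.
var_99 = monte_carlo_var(mu=0.05, sigma=0.2, value=1_000_000)
```

Scaling from a few thousand simulations to billions is, as the quote notes, a matter of changing `n_sims`; the estimator itself is unchanged.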