Why Middleware Makes Distributed Computing Difficult (And How We’re Simplifying It)
At Hadean, we set out to simplify distributed computing because we realised that the traditional approach was ignoring the most important need developers and engineers have:
The problems today’s developers and engineers want to solve require hyper-scale, distributed applications. And they want to solve these problems using the same code they’ve written locally, but across thousands of cloud servers.
And we’re not just talking about the developers and engineers at the Facebooks, Googles, Apples, and Amazons of the world. We’re talking about every business, down to the one-person start-up, because, let’s face it, every business is now a software business, and they’re all looking to adopt algorithms in some shape or form, whether machine learning, AI, or simulation, to compete with the big four.
But when developers ship their distributed applications to the cloud, they face months of engineering effort to orchestrate, performance-tune, and scale them up. Then, when the requirements change, the lengthy and costly process starts all over again.
That’s what we’ve been focused on solving.
But as we’ve listened to more and more businesses trying to build distributed applications, we realised that problems don’t just start at writing the code.
For a lot of teams, middleware is still the only viable solution for distributed applications.
And the reason teams rely on middleware? The operating system is obsolete.
Middleware Is Only Needed Because Operating Systems Are Obsolete
The limitations of using middleware for distributed computing have been a common theme in our conversations with businesses.
For years, developers and engineers have relied on middleware. We know because we’ve helped build distributed systems for advertising bidding, financial services, retail, scientific computing, and online gaming.
And while these middleware-based systems can achieve distributed-application goals, the overall experience for developers and engineers is miserable, and often short-lived.
The cloud has been a common goal for most organisations for well over a decade, and ever since, we’ve been stitching together and layering on more and more software abstractions, automation, and orchestration to make many servers behave like one. That’s been the playbook.
It’s not that people want to use middleware, it’s that they don’t have a choice. Middleware fills the gaps of outmoded operating systems that have architecturally changed little in the last 40 years. This OS paradigm was designed on the premise that a given program ran on a single computer, not across an abundance of machines at your disposal.
1) Middleware isn’t real-time.
Much of this middleware was not designed with real-time performance and reliability guarantees in mind. It was never meant for use in real-time systems such as gaming, but that doesn’t stop developers and engineers from treating it as if it were.
They build huge application stacks employing enterprise architectural patterns such as microservices mixed generously with orchestration tools, hoping they can reach their performance goals.
But without any awareness of the underlying hardware and its performance characteristics, you’ll never be able to deliver an efficient, reliable, real-time experience.
2) Middleware creates bloat.
Developers and engineers have relied on middleware for so long mostly because it was the only tool they had.
As more and more businesses offered dynamic applications, devs and engineers started leaning more heavily into different flavours of middleware.
Now middleware has simply turned into a bragging game. The question is no longer whether you’re smart enough to deliver new value to your customers, but whether you’re smart enough to set up and manage the layer upon layer of complexity the middleware creates.
Want to increase the complexity of your deployment? Just implement more middleware.
3) Middleware isn’t reliable.
This might be the worst problem of all when it comes to middleware: reliability.
Because middleware offers no reliability guarantees, developers and engineers end up in a cat-and-mouse game of re-engineering and performance tuning. It can take months to scale an application from 100 servers to 150.
The Linux kernel alone contains more than 26 million lines of code, and Windows is north of 50 million; layer the middleware on top of that, and your application on top of that again. The performance and reliability your application needs are now a million miles away.
With the traditional tools, you’re almost guaranteed to hit engineering roadblocks no matter how careful you are. And that, in turn, leads to huge compromises and missed opportunities for the business.
Because the application cannot trust the reliability of the system, complex redundancy and recovery mechanisms have to be introduced. These add code bloat and behavioural complexity to the running system, and dramatically increase the chance that it will misbehave or crash.
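To make the point concrete, here is a minimal, hypothetical Python sketch of the kind of defensive code meant here: a retry-with-backoff wrapper that applications wrap around remote calls purely because the layers underneath cannot be trusted. The function name and parameters are invented for illustration, not taken from any particular middleware.

```python
import time

def call_with_retries(fn, retries=3, backoff_s=0.01):
    """Defensive wrapper around an unreliable remote call.

    Every wrapper like this is code the application carries only
    because the system underneath offers no reliability guarantees.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError as err:
            last_error = err
            # Exponential backoff before the next attempt.
            time.sleep(backoff_s * (2 ** attempt))
    # All attempts failed; surface the last failure.
    raise last_error
```

Multiply this pattern across every remote interaction in a large system and the "code bloat" and behavioural complexity described above follow naturally.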
The Future of Distributed Computing
To address these problems, we’ve built a next-generation platform that is reliable at its core, provides real-time guarantees, is designed by default for massively distributed tasks, and doesn’t create unnecessary bloat and complexity.
Imagine if a developer could write a 100-line program, deploy it with a single command to run on the 8 cores of their desktop, and then, with a slight tweak to a command-line argument, deploy it again and watch it dynamically scale across hordes of cloud servers. That application wouldn’t need months of performance tuning, nor would it require any middleware or orchestration.
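As a purely illustrative sketch of that "same code, different scale" idea (this is not Hadean’s actual API, which the article doesn’t specify), the worker logic below is identical at every scale, and a single parameter plays the role of the command-line tweak:

```python
from multiprocessing import Pool

def square(n):
    """The application logic stays identical at every scale."""
    return n * n

def run(workload, workers):
    # The only thing that changes between a laptop run and a larger
    # deployment is the `workers` count, standing in for the single
    # command-line tweak described above.
    with Pool(processes=workers) as pool:
        return pool.map(square, workload)

if __name__ == "__main__":
    data = list(range(16))
    local = run(data, workers=8)    # the 8 cores of a desktop...
    scaled = run(data, workers=16)  # ...or more workers, same code
    assert local == scaled          # identical results at both scales
```

Here the scaling is only across local processes; the point of the sketch is that the application code itself never changes as the worker count does.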
You’d finally be delivering that real-time experience your business has dreamed of.
Developers and engineers wouldn’t have to worry about writing monitoring solutions for bespoke deployments, and they wouldn’t have to worry about unreliable middleware. As soon as the business defined a requirement, the developer could start creating value.
And finally, imagine if the platform was smart.
Imagine if, instead of thinking like one machine, the platform knew that it was part of a cloud deployment. Imagine if it had the context to know that once it had run out of resources, it could automatically call on more, either by spinning up more of the same servers (horizontal scaling) or instead choosing to spin up more powerful servers (vertical scaling), because it knew that the workload had changed.
Not only is it smart enough to know when and how to scale up, but it’s also frugal enough to know when to scale back down, freeing up resources and saving money when they’re no longer required. No more provisioning headroom for those events that you know are coming but don’t know when. You only provision and pay for the cloud resources you actually need.
You’d be able to deliver more value, better experiences, with fewer outages — and that’s exactly how we picture the future of distributed computing for developers and engineers.
Download The Hadean Architecture White Paper today and discover a new alternative to existing distributed computing methodologies.