Creating a global OpenGamma Cloud Service
Duncan Johnston-Watt - 02 September, 2013
When we were asked to come up with a compelling demonstration of our application management platform for the group CTO of a global investment bank – showcasing our ability to roll out a global service across multiple regions, with in-region elasticity to boot – it didn’t take us long to zero in on OpenGamma as an ideal candidate, having heard all about it at CEO Tales: Open Source Business Models, hosted by the redoubtable Mark Littlewood.
Co-founded by Elaine McLeod, Jim Moores and Kirk Wylie (Institutional Investor’s Tech 2013 pick), OpenGamma is an open platform for analytics and risk management in the financial services industry that has garnered a lot of attention and backing from industry heavyweights such as Accel Partners.
However, given our brief, what piqued our interest was what it would take to convert OpenGamma into a genuine cloud service: one that could be accessed from anywhere in the world using geo load balancing, and that would scale out and back in any given region depending on demand. The short answer is that it took our dynamic duo Alex Heneveld and Andrew Kennedy a weekend to crack this and deliver an awesome demo.
All well and good, but the more interesting question is how they went about this and what we learned along the way.
Essentially they went through a cloud-enabling migration exercise.
- First they went to the OpenGamma developer zone, where they downloaded the latest revision of the OpenGamma platform
- Then they unpacked its runtime environment and figured out the components that comprised it
- Next they created a Brooklyn blueprint – essentially reverse engineering a model that accurately captured the way these components were wired up
- Finally they enriched this model by taking advantage of Brooklyn building blocks such as load-balanced clusters, fabrics and a geo load balancing service
The net result was a blueprint that cloud-enabled the original runtime environment, making it possible to instantiate it as a Global OpenGamma Cloud Service.
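To make the enrichment step concrete, here is a minimal sketch in Python of the resulting structure: a server entity wrapped in a per-region load-balanced cluster, with the regions collected behind a single geo-DNS front door. This is purely illustrative – the class and field names are our own, not Brooklyn’s actual API.

```python
# Illustrative model of the cloud-enabled blueprint (NOT Brooklyn's API):
# server -> load-balanced cluster per region -> global geo-DNS front door.
from dataclasses import dataclass, field

@dataclass
class OpenGammaServer:
    name: str = "opengamma-server"

@dataclass
class LoadBalancedCluster:
    # One cluster per region, fronted by its own load balancer.
    region: str
    servers: list = field(default_factory=lambda: [OpenGammaServer()])

@dataclass
class GlobalService:
    # The geo-DNS service sits in front of every regional cluster.
    dns: str = "geoscaling-dns"
    regions: list = field(default_factory=list)

    def member_count(self) -> int:
        # Total server instances across all regions.
        return sum(len(c.servers) for c in self.regions)

svc = GlobalService(regions=[LoadBalancedCluster("aws-us-east-1"),
                             LoadBalancedCluster("aws-eu-west-1")])
```

The point of capturing the wiring as a model like this is that the same structure can then be instantiated in any number of regions, on any supported cloud.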
A point to note is the introduction of the GeoScaling DNS service, which is an example of an integration with a third-party service. In this case it provides the global URL that routes each user of the OpenGamma Cloud Service to the load-balanced cluster nearest to them.
Basically, each time the AddRegion effector is called on the Dynamic Regions Fabric, a new LB Cluster is created in the specified region and the URL of the NGINX load balancer is added to the target list of the GeoScaling DNS service.
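The mechanics of that effector can be sketched as follows. Again this is a hypothetical Python rendering for illustration – the real demo uses Brooklyn’s Dynamic Regions Fabric and GeoScaling DNS integration, and the URL scheme shown here is a placeholder.

```python
# Hypothetical sketch of the AddRegion effector: provision a load-balanced
# cluster in the named region, then register its NGINX front-end URL with
# the geo-DNS target list. Names and URLs are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class GeoDnsService:
    # Stand-in for the GeoScaling DNS integration; GeoScaling routes each
    # user to the nearest target in this list.
    targets: list = field(default_factory=list)

    def add_target(self, url: str) -> None:
        self.targets.append(url)

@dataclass
class RegionsFabric:
    dns: GeoDnsService
    clusters: dict = field(default_factory=dict)

    def add_region(self, region: str) -> str:
        """Effector: create an LB cluster in `region` and publish its URL."""
        url = f"http://opengamma-{region}.example.com"  # placeholder URL
        self.clusters[region] = url
        self.dns.add_target(url)
        return url

fabric = RegionsFabric(GeoDnsService())
fabric.add_region("aws-us-east-1")
fabric.add_region("aws-eu-west-1")
```

Because the DNS target list is updated as a side effect of adding a region, new regions become globally reachable as soon as their load balancer is up.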
In this simple demo the AutoScalerPolicy tracks the number of active OpenGamma views being served by the cluster and scales the number of OpenGamma Server instances out and back in response. (Further work is under way with Jim Moores and the team at OpenGamma to come up with other deployment patterns for OpenGamma, including unpacking the implementation of the OpenGamma Server to create a compute grid and so on.)
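The scaling decision itself is straightforward to sketch. The function below is a hypothetical illustration of the kind of calculation such a policy performs; the metric name and the thresholds (views per server, minimum and maximum cluster size) are assumptions for this example, not the values used in the demo.

```python
# Illustrative sketch of an autoscaling decision: given the number of
# active OpenGamma views and a target capacity per server, pick a cluster
# size within configured bounds. Thresholds here are assumed examples.
import math

def desired_cluster_size(active_views: int, views_per_server: int = 10,
                         min_size: int = 1, max_size: int = 5) -> int:
    """Scale out when views exceed capacity, scale back when they fall."""
    wanted = math.ceil(active_views / views_per_server)
    return max(min_size, min(max_size, wanted))
```

For example, with these assumed thresholds a quiet cluster shrinks back to one server, 25 active views call for three servers, and demand beyond 50 views is capped at the five-server maximum.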
Here we can see the OpenGamma Cloud Service instantiated in three separate AWS regions – AWS US East 1, AWS US West 1 and AWS EU West 1:
There is a thoughtful blog post by IBMer Joydipto Banerjee that proposes A Reference Model for Moving your Applications to the Cloud.
This is a pretty good starting point, but it needs to be extended. As Alex puts it in a comment on that post:
I’d add that in our experience, there is a fourth way the migration can be done — some call it “cloud-enabling” migration. This is where you analyse the architecture of the application and build a portable model or blueprint. It is called “cloud-enabling” because you can then very easily:
- Deploy to many different clouds — without requiring specific images or hardware profiles to be present or large file uploads; some approaches even let you target existing machines
- Replace components — substitute functionally equivalent components (for which blueprints are often available off-the-shelf); these might be newer versions of the software, cloud services (including datastores), or PaaS
- Add autonomic management to the blueprint — take advantage of the on-demand nature of cloud throughout the application’s lifecycle; this can use infrastructure as well as application metrics, and can provide KPIs and alerts or automated failover, elasticity, even cloud optimisation (follow-the-sun, follow-the-moon)
Emerging cloud application standards such as OASIS TOSCA will play an important role here, as will application-modelling-and-managing tools such as the open-source project Brooklyn (brooklyn.io).
[Disclaimer: I am involved with both of these initiatives.]
Feel free to check out this demo on GitHub, where the code is released under the usual Apache license.
This is an ongoing collaboration with the OpenGamma team, so it is well worth checking in with folks like Alex Heneveld, Andrew Kennedy and Andrea Turli on IRC, where they can be found in the #brooklyncentral channel on the freenode network.
For example, we are currently testing the deployment of the OpenGamma Cloud Service on the Google Cloud Platform, HP Cloud, SoftLayer Cloud and, last but by no means least, Interoute VDC. (If this list sounds familiar it is because we are now using OpenGamma as a cloud smoke test in much the same way that we have used Cloudera in the past.)
The beauty of the blueprint we’ve created for the OpenGamma Cloud Service is that it allows us – should we choose – to mix and match these target environments if, say, we wanted to leverage different cloud service providers in different geographies or, better still, eliminate any dependency on a single-source supplier.
Check out Jim Moores’ post Converting OpenGamma to a Genuine Cloud Service and recent coverage by the A-Team: Cloudsoft Lifts OpenGamma Analytics and Risk Management into the Cloud.