
Launching IDB in Asia Pacific

Written by Sam Pegler | Sep 15, 2017 8:14:50 AM


To celebrate our recent shortlisting (September 2017) in the APAC Drum Digital Trading Awards for Best Use of Performance, we spoke to IM's Production Engineer, Sam Pegler, about how we launched in the APAC region.

In July 2016 we launched IDB in Taiwan, giving us access to 30 markets across the Asia-Pacific (APAC) region. Launching IDB in APAC was a quick process: on a Thursday we were asked if we could go live within the next two weeks, and by the following Tuesday all of the infrastructure had been built and we had completed testing of our core services.

The speed at which we did this is a little misleading; most of the challenges had been overcome months earlier when we launched in the US. That launch involved taking all of our existing European infrastructure and working out which components would remain a global service and which would be regionalised and duplicated into the new US region. This modularisation of services allows us to quickly duplicate components from one region to another as we grow, with little extra overhead in staff or time.


How we did it

With the hard work of making our applications globally aware completed for the US launch, opening a new APAC region from Taiwan was relatively simple. We’re heavily invested in automation, with all of our infrastructure programmatically defined using a couple of tools: Terraform and Saltstack.

Launching the new region started with adding new region variables and copying over app configuration for the new data centre in Terraform. Applying this generated all of the required infrastructure, with databases and some applications configured using templated variables inherited from Terraform.
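As a rough sketch of that workflow (the file layout and wrapper below are hypothetical, not our actual tooling), each region's differences live in a variables file, and standing up a new region amounts to applying the shared Terraform configuration with that region's variables:

```python
import subprocess

# Hypothetical layout: one tfvars file per region, all sharing the
# same Terraform configuration. Adding a region means adding a file.
REGION_VAR_FILES = {
    "europe": "regions/europe.tfvars",
    "us": "regions/us.tfvars",
    "apac": "regions/apac.tfvars",  # the new Taiwan-based region
}

def apply_region(region: str) -> None:
    """Provision one region by applying Terraform with its variables."""
    var_file = REGION_VAR_FILES[region]
    subprocess.run(
        ["terraform", "apply", f"-var-file={var_file}"],
        check=True,  # stop loudly if provisioning fails
    )

if __name__ == "__main__":
    apply_region("apac")
```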

Internally, apps are built to be environment-agnostic and pull their configuration from their environment. Configuration is stored at a regional level and read into the application at startup; Consul and Envconsul gave us the ability to do this without having to alter configuration or build new versions for each region.
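The pattern is very simple in practice. Here's a minimal sketch, assuming (hypothetically) that Envconsul has populated the process environment from the regional Consul KV store; the variable names are illustrative:

```python
import os

# The same build runs unchanged in every region: Envconsul launches the
# process with configuration pulled from the regional Consul KV store,
# and the app only ever reads its own environment at startup.
# Variable names here are illustrative, not our real configuration.
class Config:
    def __init__(self) -> None:
        self.region = os.environ["REGION"]              # e.g. "apac"
        self.db_host = os.environ["DATABASE_HOST"]      # regional database
        self.bid_timeout_ms = int(os.environ.get("BID_TIMEOUT_MS", "100"))

if __name__ == "__main__":
    config = Config()
    print(f"starting in {config.region}, database at {config.db_host}")
```

Launched under Envconsul (something like `envconsul -prefix config/apac ./app`), the identical artefact picks up each region's values with no rebuild.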

Deploying and routing traffic was pretty simple: we chose to use the existing Google HTTP load balancer, which routes traffic to the regionally closest data centre and fails requests over to the next nearest. This gave us a huge advantage, as rather than having to regionalise public internet-facing URLs or use dynamic DNS, we could pass all of that complexity off to Google.
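The failover piece depends on Google's health checks: the load balancer only routes to backends that report healthy, so each regional deployment exposes a simple health endpoint. A minimal standard-library sketch (the /healthz path and port are assumptions, not our actual service):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Answers the load balancer's health checks with 200 when up."""

    def do_GET(self):
        if self.path == "/healthz":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # If this process stops answering, the balancer fails traffic over
    # to the next nearest region automatically.
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```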


What we’d change if we did it again

We lacked automated testing for our infrastructure at this point, which resulted in us missing a few bugs in our configuration. We’ve since started using Testinfra as a basic QA tool for all new infrastructure, which should pick up these issues at provisioning time rather than during deployments.
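To give a flavour of what those checks look like (the service, port, and file here are illustrative examples, not our real suite), Testinfra lets you assert on the state of a provisioned host using plain pytest functions:

```python
# test_provisioning.py -- run with e.g. `pytest --hosts=ssh://new-host`
# Illustrative checks only; names and paths are hypothetical.

def test_consul_agent_running(host):
    consul = host.service("consul")
    assert consul.is_running
    assert consul.is_enabled

def test_app_listening(host):
    # Provisioning should leave the application bound to its port.
    assert host.socket("tcp://0.0.0.0:8080").is_listening

def test_config_rendered(host):
    cfg = host.file("/etc/app/config.json")
    assert cfg.exists
    assert cfg.contains("apac")  # regional value templated in
```

Run against a freshly provisioned host, failures surface during provisioning rather than during a deployment.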

Initially we ran a separate full ETL system in each of our regions, each pushing data to a central region. This was pretty inefficient during off-hours (even when scaled down) and was a big source of technical debt that better planning could have reduced. However, technical debt is a spectrum, and we think in this case it was worth the cost for the reduced time to market.

Other regional quirks, such as network latency and quality within the region, were more noticeable than we expected. Compared to our European and US activity, we have to prioritise latency over efficiency and over-provision much more infrastructure to meet our latency targets.


Results

Over the past 12 months we’ve processed hundreds of billions of bid requests and billions of resulting ad impressions in the region. It has been the most reliable of all of our regions, with no issues since launch. Altogether, a pretty successful project from start to finish.



Sam Pegler, Production Engineer, Infectious Media.