My Experience At Native Touch

I joined Native Touch in May 2015 as a Software Engineer Intern. My first task was to write end-to-end tests for the company’s real-time bidding system. Since the system is written in AngularJS and Ruby on Rails, we decided to go with Protractor. My relationship with Protractor was more love-hate than anything, but I finally got the hang of it after some time.

I became a full-time employee of Native Touch in October 2015. At the time, we were basically using GitHub and CircleCI for automated testing. After some time, due to certain limitations, we decided to implement a more robust pipeline from development to production. There were a lot of ups and downs during the six-month migration, as we spent most of our time investigating options and making the best choices we could. Overall, though, I was able to learn a few things.

For the deployment pipeline, the following tools are used:

  • GitHub: as the central repository for all code.
  • Reviewable: for code reviews.
  • Jenkins: for automated testing and deployment.
  • Packer: for building Amazon Machine Images (AMIs) for servers. This tool allows us to use the same AMI across multiple environments.
  • Ansible: as Packer’s provisioner. In other words, through Ansible it is possible to run commands (for example, installing packages) on the AMI while it is being built.
  • Terraform: for spawning new machines from the AMIs generated by Packer.
  • Docker: for maintaining consistency in the different environments.
  • Consul: for managing environment variables.
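To give a flavour of how Packer and Ansible fit together, a template along the following lines could produce one AMI for all environments. This is a hedged sketch, not our actual configuration: the region, AMI ID, instance type, and playbook path are all placeholders.

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t2.medium",
    "ssh_username": "ubuntu",
    "ami_name": "app-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "ansible",
    "playbook_file": "./playbook.yml"
  }]
}
```

Because the provisioning steps live in the Ansible playbook rather than being baked in ad hoc, the same AMI can be promoted unchanged from QA to staging to production.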

Now, how does code go from development to staging?

  • Code is committed and pushed to Github as a pull request.
  • There are two hooks on GitHub: one for Jenkins to run tests (RSpec, JSHint, Jasmine, RuboCop, Protractor) on the pull request and another for Reviewable. Before code can be merged, at least two people must review it and certify that it is “good”.
  • After the code is merged, a nightly deployment job in Jenkins kicks in. This job rebuilds the AMI if there have been any changes, deploys the merged code to QA, runs feature tests (Protractor) against it and, if they pass, deploys the code to staging for manual testing by AdOps.
  • Once the code is certified as “good for deployment”, it is tagged.
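The nightly job above can be pictured as a pipeline roughly like the following. This is a hypothetical sketch, not our real Jenkins configuration: the stage names, script paths, and cron expression are all made up for illustration.

```groovy
pipeline {
  agent any
  triggers { cron('H 2 * * *') }  // run nightly
  stages {
    stage('Rebuild AMI') {
      steps { sh 'packer build packer.json' }  // rebuild the AMI
    }
    stage('Deploy to QA') {
      steps { sh './scripts/deploy.sh qa' }  // hypothetical deploy script
    }
    stage('Feature tests') {
      steps { sh 'protractor protractor.conf.js' }  // run Protractor against QA
    }
    stage('Deploy to staging') {
      steps { sh './scripts/deploy.sh staging' }
    }
  }
}
```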

How about from staging to production?

  • The pipeline responsible for deploying to production requires an AMI.
  • Once the AMI is provided, the pipeline creates a database backup from production and stores it on S3.
  • Next, the AMI is deployed on a worker machine (we call it “Canary”). This machine is responsible for running background (cron and asynchronous) jobs. We separated this role because the production machines are part of an auto-scaling group, and race conditions between background jobs were a risk.
  • After the deployment to the worker machine is done, health checks run. The focus here is to ensure the machine can reach other services such as Elasticsearch and Redis.
  • Once the health checks pass, the code is deployed to production. Terraform spawns new machines and then destroys the old ones, thereby ensuring 99% uptime during deployment.
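For a flavour of the Terraform side, a minimal sketch of spawning machines from a Packer-built AMI might look like this. The resource names and sizes are illustrative assumptions, not our real configuration:

```hcl
resource "aws_launch_configuration" "app" {
  image_id      = var.ami_id   # the AMI produced by Packer
  instance_type = "t2.medium"

  lifecycle {
    create_before_destroy = true  # new machines come up before old ones are destroyed
  }
}

resource "aws_autoscaling_group" "app" {
  launch_configuration = aws_launch_configuration.app.name
  min_size             = 2
  max_size             = 6
}
```

The `create_before_destroy` lifecycle rule is what keeps old machines serving traffic until their replacements are healthy.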

In addition, I was able to work on some other interesting aspects of the bidding system. For example, I improved the search algorithm for fetching records from Elasticsearch on both the front end and the back end, which earned positive feedback from the AdOps team.

In general, my experience at Native Touch was a wonderful one (I recently joined Ruby). Not only did I grow in my career, I also picked up soft skills such as working in a team and giving rock-solid presentations.

And that is that. Do you have more to add or questions to ask? Share your thoughts below in the comment section.

Implementing A Timed Queue Executor In Spring Boot

Let’s say you want to store tasks in an asynchronous queue, but these tasks can be skipped once a certain amount of time has elapsed. For example, after a user signs out, you want to clear a certain cache; however, if the task is not executed within a specific time range, you would like to skip it, because the cache has its own internal expiration configured. A timed queue executor lets you do exactly that.

So, how do you implement this?

The first thing to do is to wrap your tasks in two new classes: CallableWithTTL and RunnableWithTTL. These classes filter out stale tasks, preventing a task from being executed after its interval has passed.

import java.time.Duration;
import java.time.LocalDateTime;
import java.util.concurrent.Callable;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CallableWithTTL<T> implements Callable<T> {
  private static final Logger LOG = LoggerFactory.getLogger(CallableWithTTL.class);

  private final LocalDateTime whenAdded = LocalDateTime.now();
  private final Callable<T> task;
  private final Duration defaultTTL;

  public CallableWithTTL(Callable<T> task, Duration defaultTTL) {
    this.task = task;
    this.defaultTTL = defaultTTL;
  }

  @Override
  public T call() throws Exception {
    long timePassed = Duration.between(whenAdded, LocalDateTime.now()).toMillis();
    if (defaultTTL.toMillis() < timePassed) {
      LOG.warn("Task's TTL exceeded");
      return null;  // skip the task; the caller's Future resolves to null
    }
    return task.call();
  }
}

public class RunnableWithTTL implements Runnable {
  private static final Logger LOG = LoggerFactory.getLogger(RunnableWithTTL.class);

  private final LocalDateTime whenAdded = LocalDateTime.now();
  private final Runnable runnable;
  private final Duration defaultTTL;

  public RunnableWithTTL(Runnable runnable, Duration defaultTTL) {
    this.runnable = runnable;
    this.defaultTTL = defaultTTL;
  }

  @Override
  public void run() {
    long timePassed = Duration.between(whenAdded, LocalDateTime.now()).toMillis();
    if (defaultTTL.toMillis() < timePassed) {
      LOG.warn("Task's TTL exceeded");
    } else {
      runnable.run();
    }
  }
}

Then, we use these new callable and runnable classes in our custom timed queue executor. This executor is essentially a ThreadPoolTaskExecutor; the only difference is that submitted tasks are wrapped in our custom callable and runnable classes.

public class TimedQueueExecutor extends ThreadPoolTaskExecutor {

  private final Duration ttl;

  public TimedQueueExecutor(Duration ttl) {
    this.ttl = ttl;
  }

  @Override
  public <T> Future<T> submit(Callable<T> task) {
    return super.submit(new CallableWithTTL<>(task, ttl));
  }

  @Override
  public <T> ListenableFuture<T> submitListenable(Callable<T> task) {
    return super.submitListenable(new CallableWithTTL<>(task, ttl));
  }

  @Override
  public void execute(Runnable task) {
    super.execute(new RunnableWithTTL(task, ttl));
  }
}

All that remains is to use our new executor like this:

ThreadPoolTaskExecutor executor = new TimedQueueExecutor(Duration.ofSeconds(30));
executor.initialize();  // required before use when not managed as a Spring bean

where the constructor argument is the time-to-live (TTL) duration for each task.
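To see the TTL filtering in action without pulling in Spring, here is a minimal, self-contained sketch using a plain ExecutorService. The inlined CallableWithTTL mirrors the class above (minus the logging); the class and variable names in main are illustrative only.

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TimedQueueDemo {

  // Stand-in for CallableWithTTL, without the Spring/logging dependencies.
  static class CallableWithTTL<T> implements Callable<T> {
    private final LocalDateTime whenAdded = LocalDateTime.now();
    private final Callable<T> task;
    private final Duration ttl;

    CallableWithTTL(Callable<T> task, Duration ttl) {
      this.task = task;
      this.ttl = ttl;
    }

    @Override
    public T call() throws Exception {
      long timePassed = Duration.between(whenAdded, LocalDateTime.now()).toMillis();
      if (ttl.toMillis() < timePassed) {
        return null;  // TTL exceeded: skip the task
      }
      return task.call();
    }
  }

  public static void main(String[] args) throws Exception {
    ExecutorService executor = Executors.newSingleThreadExecutor();

    // A fresh task is well within its TTL, so it runs normally.
    String freshResult = executor.submit(
        new CallableWithTTL<>(() -> "cache cleared", Duration.ofSeconds(30))).get();

    // A task whose TTL elapsed before execution is skipped and yields null.
    CallableWithTTL<String> stale =
        new CallableWithTTL<>(() -> "should not run", Duration.ofMillis(1));
    Thread.sleep(50);  // let the TTL expire before submitting
    String staleResult = executor.submit(stale).get();

    System.out.println(freshResult);  // prints "cache cleared"
    System.out.println(staleResult);  // prints "null"

    executor.shutdown();
  }
}
```

Note that the caller sees a null result for a skipped task, so this pattern suits fire-and-forget work (like cache clearing) rather than tasks whose return value is consumed.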