Wednesday, April 19, 2017

Multiplatform Travis Projects (Android, iOS, Linux in the same build)

Using Travis to build and test your code is usually a piece of cake and highly recommended, but last week I tried to use Travis for a not-so-conventional project and it ended up being more challenging than expected.

The project was a C library with Java and Swift wrappers, and my goal was to generate Android, iOS and Linux versions of that library using Travis.  The main problem with my plan was that you have to define the "language" of the project in your .travis.yml file, and in my case... should it be an android, objective-c or cpp project?

It would be great if Travis supported multi-language projects [1] or multiple YAML files per project [2], but apparently none of that is going to happen in the short term.

Linux
I decided to build the Linux part using Docker, to make sure I can use the same environment locally, in Travis and in production.

iOS
Given that the only way to build an iOS project is using OS X images, and that there is no Docker support in Travis for OS X, I had to use the multi-OS capabilities of Travis [3].

Android
This ended up being the most challenging part.  Android projects require a lot of packages (tools, SDKs, NDKs, Gradle...), so I decided to use Docker here as well, to make sure I had the same environment locally and in Travis.  There were already some Docker images for this and I took many ideas from them, but I decided to build my own [4].

To avoid an overly crazy .travis.yml file, I put all the steps to install prerequisites and to launch the build process in shell scripts (two scripts per platform).  That simplifies the Travis configuration and also lets me reuse the steps if I eventually want to build locally or in Jenkins.  My project folder looks like this:

    /scripts/ios
       before_install.sh
       build.sh
    /scripts/android
       before_install.sh
       build.sh
    /scripts/linux
       before_install.sh
       build.sh

The most interesting scripts (if any) are the iOS and Android ones:

    #!/bin/bash
    # scripts/ios/before_install.sh
    echo "no additional requirements needed"

    #!/bin/bash
    # scripts/ios/build.sh
    xcodebuild build -workspace ./project.xcworkspace -scheme 'MyLibrary' -destination 'platform=iOS Simulator,name=iPhone 6,OS=10.3'

    #!/bin/bash
    # scripts/android/before_install.sh
    docker pull ggarber/android-dev

    #!/bin/bash
    # scripts/android/build.sh
    docker run --rm -it --volume=$(pwd):/opt/workspace --workdir=/opt/workspace/samples/android ggarber/android-dev gradle build


With that structure and those scripts, the resulting .travis.yml file is very simple:

language: cpp

sudo: required
dist: xenial

os:
  - linux
  - osx

osx_image: xcode8.3

services:
  - docker

before_install:
  - if [[ "$TRAVIS_OS_NAME" != "osx" ]]; then ./scripts/linux/before_install.sh  ; fi
  - if [[ "$TRAVIS_OS_NAME" != "osx" ]]; then ./scripts/android/before_install.sh ; fi
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then ./scripts/ios/before_install.sh     ; fi

script:
  - if [[ "$TRAVIS_OS_NAME" != "osx" ]]; then ./scripts/linux/build.sh   ; fi
  - if [[ "$TRAVIS_OS_NAME" != "osx" ]]; then ./scripts/android/build.sh ; fi
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then ./scripts/ios/build.sh     ; fi

This is working fine, although the build process is a little bit slow, so these are some ideas to explore to improve it in the future:
  • Linux and Android builds could run in parallel (see the sketch after this list).
  • Android Docker images are very big (not only mine but all the ones I found).  According to Docker Hub it is a 2 GB compressed image.  There are probably ways to strip this down.
  • I'm not caching the Android packages downloaded during the build process inside the Docker container.
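
For the first idea, this is a rough (untested) sketch of how the .travis.yml above could be reorganized into an explicit build matrix, keeping the rest of the file as it is, so that the Linux and Android builds become separate jobs that run in parallel:

    # One job per platform instead of branching on $TRAVIS_OS_NAME in every step.
    matrix:
      include:
        - os: linux
          env: TARGET=linux
        - os: linux
          env: TARGET=android
        - os: osx
          osx_image: xcode8.3
          env: TARGET=ios

    before_install:
      - ./scripts/$TARGET/before_install.sh

    script:
      - ./scripts/$TARGET/build.sh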

[1] https://github.com/travis-ci/travis-ci/issues/4090
[2] https://github.com/travis-ci/travis-ci/issues/3540
[3] https://docs.travis-ci.com/user/multi-os/
[4] https://github.com/ggarber/docker-android-dev

Monday, February 6, 2017

Using Kafka as the backbone for your microservices architecture

Disclaimer: I only use the word microservices here to get your attention.  Otherwise I would say your platform, your infrastructure or your services.

In many cases, when your application and/or your team starts growing, the only way to maintain a fast development and deployment pace is to split the application and the teams into smaller units.  In the case of teams/people this creates some interesting and not necessarily easier challenges, but this post is focused on the problems and complexity created on the software/architecture side.

When you split your solution into many components there are at least two problems to solve:
  • How to pass information from one component to another (e.g. how do you notify all the sub-components when a user signs up, so that you can send them notifications, start billing them, generate recommendations...)
  • How to maintain the consistency of the partially overlapping data stored in the different components (e.g. how do you remove all of a user's data from all the sub-components when they decide to drop out of your service)

Inter-component communication

At a very high level, there are two communication models needed in most architectures:
  • Synchronous request/response communication.  This has its own challenges and I recommend using gRPC together with some best practices around load balancing, service discovery, circuit breakers... (find my slides for TEFCON 2016 here), but it is usually a well understood model.
  • Asynchronous event-based communication, where a component generates an event and one or many components receive it and run some logic in response to that event.

The elegant way to solve this second requirement is to have a bus or a queue in the middle (depending on the reliability guarantees required for the use case) where producers send events and consumers read them.  There are many solutions to implement this pattern, but when you have to handle heterogeneous consumers (that consume events at different rates or with different guarantees), or you have a massive number of events or consumers, the solution is not so obvious.
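
To make this more concrete, here is a minimal sketch of the signup example above using the kafka-python client.  It is not taken from any real system: the broker address, topic name, event format and consumer group are made up for illustration.

    # Producer side (e.g. the signup service): publish the event and move on.
    import json
    from kafka import KafkaProducer, KafkaConsumer

    producer = KafkaProducer(bootstrap_servers='localhost:9092',
                             value_serializer=lambda v: json.dumps(v).encode('utf-8'))
    producer.send('user-events', {'type': 'user_signed_up', 'user_id': 42})
    producer.flush()

    # Consumer side: each downstream component (billing, notifications...) reads
    # the same topic at its own pace, identified by its own consumer group.
    consumer = KafkaConsumer('user-events',
                             bootstrap_servers='localhost:9092',
                             group_id='billing',
                             value_deserializer=lambda v: json.loads(v.decode('utf-8')))
    for message in consumer:
        event = message.value
        if event['type'] == 'user_signed_up':
            start_billing(event['user_id'])  # hypothetical billing logic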

Data consistency

The biggest problem to solve in pure microservices architectures is probably how to ensure data consistency.  Once you split your application into different modules with data that is not completely independent (at the very least they all have information about the same users), you have to figure out how to keep that information in sync.

Obviously you should try to keep these dependencies and this duplicated data as small as possible, but usually you at least have to solve the problem of having the same users created in all of the components.

To solve it you need a way to sync data changes across components, so that data duplicated in one component gets updated in the others that hold a copy of it.  Basically, you need a way to replicate data that ensures its eventual consistency.
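
A sketch of what that replication could look like on top of the same kind of log, again with invented topic and event names (in practice you would probably use a compacted topic keyed by user id):

    # Hypothetical replication consumer: maintain a local copy of the user data
    # this component cares about by applying a stream of change events.
    import json
    from kafka import KafkaConsumer

    local_users = {}  # stand-in for this component's own datastore

    consumer = KafkaConsumer('user-changes',
                             bootstrap_servers='localhost:9092',
                             group_id='recommendations-replica',
                             auto_offset_reset='earliest',
                             value_deserializer=lambda v: json.loads(v.decode('utf-8')))
    for message in consumer:
        change = message.value
        if change['op'] == 'delete':      # e.g. the user dropped out of the service
            local_users.pop(change['user_id'], None)
        else:                             # create or update
            local_users[change['user_id']] = change['data']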

The Unified Log solution

If you look at those two problems, they can be reduced to a single one: having a real-time, reliable unified log that you can use to distribute events among different components with different needs and capabilities.  That's exactly the problem LinkedIn had and what they built Kafka to solve.  The post "The Log: What every software engineer should know about real-time data's unifying abstraction" is a highly recommended read.

Kafka decouples producers from consumers, including the ability to have slow consumers without affecting the rest of them.  It does that while supporting very high event rates (hundreds of thousands of events per second are common) with very low latencies (<20 ms easily).  All of this while still being a very simple solution, and providing some advanced features like organizing events in topics, preserving the ordering of events, and handling consumer groups.

Those characteristics make Kafka suitable for most inter-component communication use cases, including event distribution, log processing and data replication/synchronization.  All with a single, simple solution, by modeling all these communications as an infinite list of ordered events, accessible to multiple consumers through a centralized unified log.

This post was about Kafka, but all (or most) of it is equally applicable to Amazon's Kafka clone, Kinesis.

You can follow me on Twitter if you are interested in software and real-time communications.

Sunday, January 15, 2017

Starting to love gRPC for interprocess communication (1/2)

In the context of a discussion around programming languages and static typing, a colleague said that when you get older you stop caring about fancy technologies and realize that it is way better to just use safe and well-proven solutions.

I'm kind of tired of having used loosely defined JSON-over-HTTP interfaces for many years, and when I discovered gRPC last year it looked like exactly what I was looking for.  I would love to start using it in production as soon as possible, so I decided to play with it for a while first and explain how it went.

I will split my comments about gRPC into two posts: this first one about what gRPC is and what advantages it provides, and the next one on how to use it in our applications.

gRPC embraces the RPC paradigm, where APIs are defined as actions that receive some arguments and reply with a response.  Initially it feels like going ten years back to when we started to use SOAP and similar technologies, but we have to admit that it is much simpler to map those primitives to our client and server code (for example, no URL path mapping), and it is more strict and explicit about what can and cannot be done for each operation, which usually makes the system more robust.

In gRPC you define your interfaces (methods, arguments and results) in an IDL using the protocol buffers format.  This definition is used to generate the server and client code automatically.  The serialization of the calls is done using the binary protobuf format too.  This makes the communication efficient and the protocol extensible, since you can use all the features available in protobuf (for example composition or enum types).
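
As an illustration (not from a real project), assume a trivial user service defined in a hypothetical users.proto and compiled with grpc_tools.protoc; the generated users_pb2 / users_pb2_grpc modules are what the Python code below relies on:

    # Assumed IDL (users.proto):
    #   service UserService {
    #     rpc GetUser (GetUserRequest) returns (User) {}
    #   }
    from concurrent import futures
    import grpc
    import users_pb2
    import users_pb2_grpc

    # Server side: implement the generated servicer base class.
    class UserService(users_pb2_grpc.UserServiceServicer):
        def GetUser(self, request, context):
            # A real implementation would look the user up in a database.
            return users_pb2.User(id=request.id, name='alice')

    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    users_pb2_grpc.add_UserServiceServicer_to_server(UserService(), server)
    server.add_insecure_port('[::]:50051')
    server.start()

    # Client side: the generated stub exposes the same strongly typed method.
    channel = grpc.insecure_channel('localhost:50051')
    stub = users_pb2_grpc.UserServiceStub(channel)
    user = stub.GetUser(users_pb2.GetUserRequest(id=42))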

Two of the advantages of this approach are automatic code generation and schema validation.  That can also be done with "traditional" REST interfaces, but it is more tedious, less efficient and, in my experience, much more error-prone when you add new features or refactor the code.

Communication in gRPC is based on the HTTP/2 transport.  This provides all the advantages of the new HTTP version (multiplexing, streaming, header compression) while at the same time allowing you to keep using existing HTTP infrastructure (nginx or other load balancers, for example).

Another special feature of gRPC is its streaming support, which is very convenient for some APIs these days.  With gRPC you are able to send a (potentially infinite) sequence of arguments to the server and receive a sequence of results from it.  That is very useful to build more responsive applications, where data can be processed and displayed even if part of it is not ready yet.  It is also very useful for notification-based APIs, as in a chat application for example.
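
Building on the hypothetical users.proto from the previous sketch, assume it also declares a server-streaming method, rpc Watch (WatchRequest) returns (stream Notification), on a NotificationService; on both sides it maps to plain Python iteration:

    import time
    import grpc
    import users_pb2
    import users_pb2_grpc

    # Server side: the servicer method is a generator that keeps yielding
    # responses for as long as the client keeps the call open.
    class NotificationService(users_pb2_grpc.NotificationServiceServicer):
        def Watch(self, request, context):
            while context.is_active():
                yield users_pb2.Notification(text='new message')
                time.sleep(1)

    # Client side: the call returns an iterator of responses.
    stub = users_pb2_grpc.NotificationServiceStub(grpc.insecure_channel('localhost:50051'))
    for notification in stub.Watch(users_pb2.WatchRequest(user_id=42)):
        print(notification.text)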

Compared with other IPC frameworks like Finagle (disclaimer: I'm a fan of it), gRPC is still missing important features like client-side load balancing (although it is work in progress) and some other goodies like circuit breakers, retries or service discovery.  In the meantime, people are implementing those features on top of the framework.

The other missing piece is browser support.  Even though there is support for many languages, including JavaScript, browser limitations make it impossible to implement a gRPC-compatible web client today.  The community is working on an extension of the protocol to support browsers, and in the meantime the only solution seems to be the grpc-gateway proxy, which generates a JSON-HTTP to gRPC proxy based on the IDL of the service with some extra annotations.