Wednesday, November 22, 2017

Playing with Redis Geo structures

One of the things I had never used in Redis is the set of commands provided to store and query geolocated data.   Since version 3.2, Redis includes six new and very simple commands that can be used to store tags associated with coordinates, calculate distances between those tags, or find the tags around some specific coordinates.

Let's try to build a simple prototype to see how it works.   Imagine that you have a service where users can rate anything (people, places, restaurants...)* and at some point you want to show in the UI of your app what things other people around you are rating right now. 

Almost every database these days has support for storing this type of geographically located information, but for use cases like this one, where you want very fast access to ephemeral information, an in-memory database can be a very good choice.

Storing data

The GEOADD command in Redis is the one you use to insert a new tag associated with a specific position (longitude, latitude).

The structure Redis uses for geolocated data has O(log(N)) insertion and query time, where N is the number of items in the set, so you probably don't want to keep all the data in a single set but partition it by country or some other grouping that makes sense for your use case.   In our example we could try partitioning it per city.

So every time somebody rates something identified by a tag we will do this insert in Redis:
GEOADD city longitude latitude tag
For example:
GEOADD sanfrancisco -122 37 goldengate
Redis stores this geographical information internally in a Sorted Set, so we can use any of the sorted set commands to manipulate or retrieve the list of items stored:
ZRANGE sanfrancisco 0 -1
1) "goldengate"

Retrieving data

There are two commands that you can use to make geographical queries on the stored data depending on your use case:
  • GEORADIUS to get the tags stored inside a given radius around some specific coordinates.
  • GEORADIUSBYMEMBER to make the same query but centered on a tag that is already stored in the set.
For our use case, every time somebody opens the app we will retrieve all the tags around their position sorted by distance.

GEORADIUS sanfrancisco -122.2 37.1 50 km ASC
1) "goldengate"

How does it work internally

As we mentioned before everything is stored inside Redis in the existing Sorted Set structures.

The way those zsets are leveraged is by using a score based on the latitude and longitude.  Basically, by generating the zset scores interleaving the bits of the latitude and longitude of each entry, you can later retrieve all the tags in a specific geographical square as a range of those scores.

That way, with 9 ranges (the square containing the point plus its 8 neighbours) you can cover all the area around a specific point.  And those ranges can be of different sizes, so queries with different radiuses can be made just by trimming bits at the end of the score.

This technique is called geohashing and it makes these geo commands very easy to implement on top of sorted sets.
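
To get an intuition of how a single score can encode two coordinates, here is a minimal sketch of the bit-interleaving idea. It is not Redis's exact implementation (the ranges, precision and bit order there differ slightly), just an illustration of the technique:

    # Illustration only: map each coordinate to 26 bits and interleave
    # them into a single 52-bit integer usable as a sorted set score.
    def encode(value, min_value, max_value, bits=26):
        return int((value - min_value) / (max_value - min_value) * (1 << bits))

    def interleave(lon_bits, lat_bits, bits=26):
        score = 0
        for i in range(bits):
            score |= ((lon_bits >> i) & 1) << (2 * i + 1)
            score |= ((lat_bits >> i) & 1) << (2 * i)
        return score

    lon = encode(-122.478, -180, 180)
    lat = encode(37.819, -90, 90)
    print(interleave(lon, lat))

Nearby points share the most significant bits of that score, which is why a geographical square maps to a contiguous range of scores in the zset.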

Hope this is useful for other people implementing similar services.  The truth is I never stop being amazed by Redis...

* Disclaimer: I built that service with some friends, you can see it at http://www.pleason.com/

Monday, May 8, 2017

How to (not) reuse code between Android and iOS

Most of the mobile applications we build these days have to work on two different platforms (Android and iOS).  Each of these platforms has its own frameworks, tools and programming languages, so usually you end up building two completely separate applications, many times even built by separate teams.

[Note: If you are using some cross-platform development environment like react-native or Xamarin or building a Web/Hybrid app you are "lucky" and this post doesn't apply to you :)]

Unless you are working on a very simple app, at some point you will realize that there are some parts of the application that you are implementing twice because you need them in both platforms (for example some business logic or the code to make requests to the HTTP server APIs).

Based on the capabilities of Android and iOS you have basically two options:
Option 1: Implement everything twice using the official language and libraries of each platform (e.g. implement the access to HTTP APIs using Swift and URLSession in the iOS app and using Java and Volley in the Android app).
Option 2: Implement the reusable code in C++ and compile it into the iOS app (creating an Objective-C++ wrapper) and use it in the Android app (creating a JNI wrapper).

These are some possible advantages of Option 1:
  • Code is usually easier to read and maintain when written in modern languages (for example Swift vs C++).
  • Native integration: when using an Android library to make HTTP requests it will probably be integrated with the system proxy configuration and validate the SSL certificates with the system CAs by default.
  • No plumbing/boring code to write to provide access to the C++ library from the application (for example with JNI).  This can be partially mitigated using frameworks like SWIG to autogenerate the wrappers but it is still boring and usually problematic.
  • Simpler to debug because there is a single layer instead of having to make calls across layers with different technologies (for example with JNI).
  • Faster and simpler build process because there are fewer libraries/tools involved (for example no NDK required).
These are some possible advantages of Option 2:
  • No duplicated code to develop and maintain.
  • Avoid inconsistencies in naming, algorithms and protocol implementations because everything is implemented in a single place.
  • Performance can be better, although this is usually not an issue in most cases.
As we can see there are important pros and cons for both options, so let's try another approach...  What are other popular mobile libraries doing?

I put some of those libraries in a diagram across two axes: Y for the size/complexity of the library and X for the number of platforms to support.  Another relevant variable could be how important performance optimisation is, but I didn't want to make a 3D diagram :)

In blue, libraries using Option 1, and in green, libraries using Option 2.
[Apology: I picked some popular libraries I have used in the past; the lines of code and number of platforms are just an estimation, I didn't really count them]

As we can see, most of the popular libraries are using Option 1, reimplementing the library twice, once for Android and once for iOS.  On the other hand, some big libraries related to real-time communications or databases are using Option 2, implementing the core in C++ and exposing it with wrappers to the Java and Objective-C applications.

Conclusion

What the right solution is probably depends on the type of project and the team building it, but in my opinion in many (or most) cases it is less effort to develop and maintain 2 simple implementations than to write and maintain a single more complex implementation plus the wrappers for the different platforms.   In addition you can (and should) mitigate the issues of Option 1 by making use of tools to autogenerate code when possible, for example using protocol buffers/gRPC for the client-server communication or Swagger to generate clients for REST APIs.

I'm very interested in knowing your opinion on this topic.  What do you think?   What are you doing right now in your projects?

Wednesday, April 19, 2017

Multiplatform Travis Projects (Android, iOS, Linux in the same build)

Using Travis to build and test your code is usually a piece of cake and highly recommended, but last week I tried to use Travis for a not so conventional project and it ended up being more challenging than expected.

The project was a C library with Java and Swift wrappers and my goal was to generate Android, iOS and Linux versions of that library using Travis.   The main problem with my plan was that you have to define the "language" of the project in your .travis.yml file and in my case... should it be an android, objective-c or cpp project?

It would be great if Travis supported multi-language projects [1] or multiple yaml files per project [2], but apparently none of that is going to happen in the short term.

Linux
I decided to build the Linux part using Docker to make sure I could use the same environment locally, in Travis and in production.

iOS
Given the fact that the only way to build an iOS project is using OSX images, and that there is no Docker support in Travis for OSX, I had to use the multiple operating systems capabilities of Travis [3].

Android
This ended up being the most challenging part.  Android projects require a lot of packages (tools, SDKs, NDKs, Gradle...) so I decided to use Docker for this too, to make sure I had the same environment locally and in Travis.    There were some Docker images for this and I took many ideas from them, but I decided to generate my own [4].

To avoid a too crazy .travis.yml file I put all the steps to install prerequisites and to launch the build process in shell scripts (2 scripts per platform).  That simplifies the Travis configuration and also lets me reuse the steps if I eventually want to build locally or in Jenkins.   My project folder looks like this:

    /scripts/ios
       before_install.sh
       build.sh
    /scripts/android
       before_install.sh
       build.sh
    /scripts/linux
       before_install.sh
       build.sh

The most interesting scripts (if any) are the android and ios ones.

    #!/bin/bash
    # scripts/ios/before_install.sh
    echo "no additional requirements needed"

    #!/bin/bash
    # scripts/ios/build.sh
    xcodebuild build -workspace ./project.xcworkspace -scheme 'MyLibrary' -destination 'platform=iOS Simulator,name=iPhone 6,OS=10.3'

    #!/bin/bash
    # scripts/android/before_install.sh
    docker pull ggarber/android-dev

    #!/bin/bash
    # scripts/android/build.sh
    docker run --rm -it --volume=$(pwd):/opt/workspace --workdir=/opt/workspace/samples/android ggarber/android-dev gradle build


With that structure and those scripts the resulting .travis.yml file is very simple:

language: cpp

sudo: required
dist: xenial

os:
  - linux
  - osx

osx_image: xcode8.3

services:
  - docker

before_install:
  - if [[ "$TRAVIS_OS_NAME" != "osx" ]]; then ./scripts/linux/before_install.sh  ; fi
  - if [[ "$TRAVIS_OS_NAME" != "osx" ]]; then ./scripts/android/before_install.sh ; fi
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then ./scripts/ios/before_install.sh     ; fi

script:
  - if [[ "$TRAVIS_OS_NAME" != "osx" ]]; then ./scripts/linux/script.sh  ; fi
  - if [[ "$TRAVIS_OS_NAME" != "osx" ]]; then ./scripts/android/script.sh ; fi
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then ./scripts/ios/script.sh     ; fi

This is working fine although the build process is a little bit slow so these are some ideas to explore to try to improve it in the future:
  • Linux and Android builds could run in parallel.
  • Android Docker images are very big (not only mine but all the ones I found).   According to Docker Hub it is a 2 GB compressed image.  Probably there are ways to strip this down.
  • I'm not caching the android packages being downloaded during the build process inside the docker container.

[1] https://github.com/travis-ci/travis-ci/issues/4090
[2] https://github.com/travis-ci/travis-ci/issues/3540
[3] https://docs.travis-ci.com/user/multi-os/
[4] https://github.com/ggarber/docker-android-dev

Monday, February 6, 2017

Using Kafka as the backbone for your microservices architecture

Disclaimer: I only use the word microservices here to get your attention.  Otherwise I would say your platform, your infrastructure or your services.

In many cases, when your application and/or your team start growing, the only way to maintain a fast development and deployment pace is to split the application and the teams into smaller units.   In the case of teams/people that creates some interesting (and not necessarily easier to solve) challenges, but this post is focused on the problems and complexity created on the software/architecture side.

When you split your solution in many components there are at least two problems to solve:
  • How to pass the information from one component to another (e.g. how do you notify all the sub-components when a user signs up so that you can send notifications, start billing, generate recommendations...)
  • How to maintain the consistency of all the partially overlapping data stored in the different components (e.g. how do you remove all the user data from all the sub-components when the user decides to leave your service)

Inter component communication

At a very high level there are two communication models that are needed in most of the architectures:
  • Synchronous request/response communications.  These have their own challenges and I recommend using gRPC and some best practices around load balancing, service discovery, circuit breakers... (find my slides for TEFCON 2016 here), but it is usually a well understood model.
  • Asynchronous event-based communications, where a component generates an event and one or many components receive it and implement some logic in response to that event.
The elegant way to solve this second requirement is to have a bus or a queue in the middle (depending on the reliability guarantees required for the use case) where producers send events and consumers read those events from it.    There are many solutions to implement this pattern, but when you have to handle heterogeneous consumers (that consume events at different rates or with different guarantees), or you have a massive amount of events or consumers, the solution is not so obvious.
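
As an illustration of that event-based model, this is how the signup example could look with Kafka from Python code. A minimal sketch using the kafka-python client, where the topic name, payload and group id are invented for the example:

    import json
    from kafka import KafkaProducer, KafkaConsumer

    # Producer side: the signup component publishes the event and moves on.
    producer = KafkaProducer(bootstrap_servers='localhost:9092',
                             value_serializer=lambda v: json.dumps(v).encode('utf-8'))
    producer.send('user-signed-up', {'user_id': '1234', 'email': 'user@example.com'})
    producer.flush()

    # Consumer side: each interested component (billing, notifications,
    # recommendations...) reads the same topic with its own consumer group,
    # at its own pace.
    consumer = KafkaConsumer('user-signed-up',
                             bootstrap_servers='localhost:9092',
                             group_id='billing',
                             value_deserializer=lambda v: json.loads(v.decode('utf-8')))
    for message in consumer:
        print('start billing user', message.value['user_id'])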

Data consistency

The biggest problem to solve in pure microservices architectures is probably how to ensure data consistency.   Once you split your application into different modules with data that is not completely independent (at the very least they all have information about the same users) you have to figure out how to keep that information in sync.

Obviously you have to try to keep these dependencies and the duplicated data as small as possible, but usually you at least have to solve the problem of having the same users created in all of them.

To solve it you need a way to propagate data changes between components, so that data that is duplicated can be updated in the other components.  So basically you need a way to replicate data that ensures its eventual consistency.

The Unified Log solution

If you look at those two problems they can be reduced to a single one: having a real-time and reliable unified log that you can use to distribute events among different components with different needs and capabilities.   That's exactly the problem that LinkedIn had and what they built Kafka to solve.   The post "The Log: What every software engineer should know about real-time data's unifying abstraction" is a highly recommended read.

Kafka decouples the producers from the consumers, including the ability to have slow consumers without affecting the rest of them.  Kafka does that while supporting very high rates of events (it is common to have hundreds of thousands per second) with very low latencies (<20 msec easily).  And it provides all of this while still being a very simple solution, with some advanced features like organizing events in topics, preserving the ordering of the events or handling consumer groups.

Those characteristics make Kafka suitable to support most of the inter-component communication use cases, including event distribution, log processing and data replication/synchronization.  All with a single simple solution, by modeling all these communications as an infinite list of ordered events accessible to multiple consumers through a centralized unified log.
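
The data replication case can be modeled with the same API. Here is a sketch (again kafka-python, with invented topic and field names) of a component rebuilding its local copy of the users data by replaying the topic from the beginning and then staying subscribed to remain eventually consistent:

    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer('users',
                             bootstrap_servers='localhost:9092',
                             group_id='recommendations',
                             auto_offset_reset='earliest',
                             value_deserializer=lambda v: json.loads(v.decode('utf-8')))

    # Replay every change to the users data and apply it locally; deletions
    # (the drop out case mentioned above) are just another event in the log.
    local_users = {}
    for message in consumer:
        user = message.value
        if user.get('deleted'):
            local_users.pop(user['user_id'], None)
        else:
            local_users[user['user_id']] = user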

This post was about Kafka, but most of it is equally applicable to its Amazon clone, Kinesis.

You can follow me on Twitter if you are interested in Software and Real Time Communications.

Sunday, January 15, 2017

Starting to love gRPC for interprocess communication (1/2)

In the context of a discussion around programming languages and static typing, a colleague said that when you get older you stop caring about fancy technologies and you realize that it is way better to just use safe and well-proven solutions.

I'm kind of tired of having used loosely defined JSON-HTTP interfaces for many years, and when I discovered gRPC last year it looked like exactly what I was looking for.  I would love to start using it in production as soon as possible, so I decided to play with it for a while first and explain how it went.

I will split my comments about gRPC in two posts.  This first one is about what gRPC is and what advantages it provides, and the next one will be about how to use it in our applications.

gRPC embraces the RPC paradigm, where the APIs are defined as actions receiving some arguments and replying with a response.  Initially it feels like going 10 years back to when we started using SOAP and similar technologies, but we have to admit that it is much simpler to map those primitives to our client and server code (for example, no URL path mapping), and it is stricter and more explicit about what can and cannot be done for each operation, which usually makes the system more robust.

In gRPC you define your interfaces (methods, arguments and results) in an IDL using the protocol buffers format.   This definition is used to generate the server and client code automatically.   The serialization of the calls is also done using the binary protobuf format.  This makes the communication efficient and the protocol extensible, being able to use all the features available in protobuf (for example composition or enum types).
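
To make it more concrete, this is roughly how a client call looks in Python. The ratings service, its methods and the generated module names below are invented for the example (grpcio-tools generates the *_pb2 and *_pb2_grpc modules from the .proto file):

    # Hypothetical ratings.proto:
    #
    #   service Ratings {
    #     rpc Rate (RateRequest) returns (RateResponse);
    #   }
    #
    # After generating ratings_pb2.py and ratings_pb2_grpc.py with
    # grpcio-tools, the client code is just:

    import grpc
    import ratings_pb2
    import ratings_pb2_grpc

    channel = grpc.insecure_channel('localhost:50051')
    stub = ratings_pb2_grpc.RatingsStub(channel)

    # A strongly typed call: the request fields and the response type
    # come straight from the .proto definition.
    response = stub.Rate(ratings_pb2.RateRequest(tag='goldengate', stars=5))
    print(response)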

Two of the advantages of this approach are automatic code generation and schema validation.  That can also be done with "traditional" REST interfaces, but it is more tedious, less efficient and, in my experience, much easier to get wrong when you add new features or refactor the code.

The communication in gRPC is based on the HTTP/2 transport.  This provides all the advantages of the new HTTP version (multiplexing, streaming, compression) while at the same time allowing you to keep using existing HTTP infrastructure (nginx or other load balancers for example).

Another special feature of gRPC is the streaming support, which is very convenient for some APIs these days.    With gRPC you are able to send a (potentially infinite) sequence of arguments to the server and receive a sequence of results from it.   That is very useful to implement more responsive applications where data can be processed and displayed even if part of it is not ready yet.    It is also very useful for APIs based on notifications, as in the case of a chat application for example.
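
A server-streaming method is equally simple to consume from the client side. A hypothetical sketch, reusing the invented ratings service from the previous example with an added Watch method declared as "rpc Watch (WatchRequest) returns (stream Rating)":

    import grpc
    import ratings_pb2
    import ratings_pb2_grpc

    channel = grpc.insecure_channel('localhost:50051')
    stub = ratings_pb2_grpc.RatingsStub(channel)

    # Server-streaming calls return an iterator: each new rating can be
    # processed and displayed as soon as the server pushes it.
    for rating in stub.Watch(ratings_pb2.WatchRequest(city='sanfrancisco')):
        print('new rating received:', rating.tag)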

When compared with other IPC frameworks like Finagle (disclaimer: I'm a fan of it), gRPC is still missing important features like client-side load balancing (although it is work in progress) and some other goodies like circuit breakers, retries or service discovery.   In the meantime people are implementing those features on top of the framework.

The other missing piece is browser support.  Even if there is support for many languages including JavaScript, the browser limitations make it impossible to implement a gRPC compatible web client nowadays.   The community is working on an extension of the protocol to support browsers, and in the meantime the only solution seems to be the grpc-gateway proxy, which generates a JSON-HTTP to gRPC proxy based on the IDL of the service with some extra annotations.