Monday, April 12, 2010

More efficient client-side HTTP handling with the new Async HTTP client @GitHub

1. Yet another HTTP-client?

Ok now: I am aware there are quite a few contestants for the title of "best Java HTTP client", starting with the well-rounded and respected Apache HTTP Client (esp. version 4.0). But there is now a very promising, up-and-coming young challenger, aptly named Async HTTP Client ("Ning async http client", considering its corporate sponsor; hosted at GitHub), written by a very competent guy whose past work includes things like Glassfish, and especially its Atmosphere module (async HTTP goodness: Comet, WebSocket etc.).

Given that it has the single most important thing an open source project needs (at least one technically strong developer who knows the domain well), I have high hopes for this project, and recommend you keep it in mind if you need an HTTP client for high-volume server-side systems (why server-side? because that's where you typically need much more concurrent client-side HTTP access, when talking to other web services).

2. Asynchronous? So... ?

So why does it actually matter whether you use a blocking or non-blocking client? Well, the "async" (aka non-blocking) part is obviously important in general for highly concurrent use cases, where JVM thread scaling does not hold up well beyond hundreds of threads.
But more interestingly, it also really starts to matter when your service has "branching": that is, for each call your service handles, it needs to make multiple calls to other services. With blocking HTTP clients you either have to spin up new threads (complicated, and somewhat costly), or do requests sequentially. The former can achieve low(er) latency; the latter is simpler and more efficient. But with asynchronous calls, you can actually fire all (or some) requests concurrently, as early as possible; do some processing after this; and, when necessary, check for request results via Futures (as sketched below). While not as trivially easy as sequential calls, this can be almost as good, and with much improved latency.
A high branching factor is what powers many high-volume web sites: for example, high-traffic pages such as Amazon.com's are composed from multiple separately computed blocks, many of which are in turn built from multiple independent calls to backend services. This cannot be done with tolerable latency using sequential web service calls.
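To make this more concrete, here is a rough sketch of what the fan-out pattern might look like with the new client. I am going from memory of the current API here (AsyncHttpClient, prepareGet(), execute() returning a Future<Response>), and the backend URLs are just made-up placeholders; exact details may well change before 1.0.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Future;

    import com.ning.http.client.AsyncHttpClient;
    import com.ning.http.client.Response;

    public class FanOutSketch
    {
        public static void main(String[] args) throws Exception
        {
            AsyncHttpClient client = new AsyncHttpClient();
            // Made-up backend endpoints, just for illustration:
            String[] urls = new String[] {
                "http://backend1.example.com/recommendations",
                "http://backend2.example.com/reviews",
                "http://backend3.example.com/pricing"
            };
            // Fire off all requests as early as possible; none of these calls block
            List<Future<Response>> pending = new ArrayList<Future<Response>>();
            for (String url : urls) {
                pending.add(client.prepareGet(url).execute());
            }

            // ... do whatever other processing you can while requests are in flight ...

            // Block only when results are actually needed
            for (Future<Response> f : pending) {
                Response resp = f.get();
                System.out.println("status: " + resp.getStatusCode());
            }
            client.close();
        }
    }

The nice part is that total latency is then roughly that of the slowest backend call, rather than the sum of all of them, which is what sequential blocking calls would give you.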

Beyond the non-blocking part, it is also likely that over time a blocking convenience facade will be developed as well, so it is not unreasonable to expect this to grow into a more general-purpose solution for HTTP access (at least that is my personal opinion/wish).
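And in the meantime nothing stops you from using it in a blocking style yourself: just call get() on the returned Future right away. A minimal sketch (the URL is a placeholder, and again the API details are from my recollection):

    import java.util.concurrent.Future;

    import com.ning.http.client.AsyncHttpClient;
    import com.ning.http.client.Response;

    public class BlockingStyleSketch
    {
        public static void main(String[] args) throws Exception
        {
            AsyncHttpClient client = new AsyncHttpClient();
            Future<Response> f = client.prepareGet("http://www.example.com/").execute();
            // Calling get() right away gives plain old synchronous behavior:
            Response resp = f.get();
            System.out.println(resp.getResponseBody());
            client.close();
        }
    }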

Anyway: cool beans; we'll see how this project advances. So far progress has been remarkably rapid: in fact, version 1.0 seems to be in sight, as a tentative feature list has already been discussed on the user list. More on 1.0 when it is out in the wild.

3. Disclosure

In the spirit of full disclosure, I should mention that Jean-Francois (the author) is actually my current co-worker; but at least I know what I am talking about when praising him. :-)
