Notes on learning SSL, OpenSSL, and advanced SSL topics


It’s surprisingly difficult to find good resources online about the functioning and internals of SSL and related areas.

My objectives in writing this article are the following:

(a) Organize what I already know and understand for quicker reference later on.

(b) Organize access to good resources on the topic that I have come across, to save someone else the hassle.

Just as it is rather rude to directly try to peer into the contents of someone’s stomach before trying to befriend and exchange pleasantries first, in this article, I go over some functional aspects of SSL before looking at the internals and some advanced topics. 🙂


I would recommend reading a StackExchange article on an introduction to SSL/TLS first. Then, after the understanding of the overall flow is clear, head over to Section 7.3 of the TLS RFC, and spend some solid quality time there. After that, there is a nice article about the first few seconds of an HTTPS connection. This article, with Wireshark data captures, walks through the stages of Client Hello, Server Hello, certificate validation, RSA prime-factorization logic, the certificate-signing chain of trust, generation of the pre-master secret on the client, sharing it with the server, generation of the symmetric master key and its exchange with the client… and finally encrypting the traffic with an XOR-based algorithm like RC4. This article includes several references to SSL implementation code, and goes more in-depth about what happens in the different stages, like the merits of the hash functions used and their security (e.g. MD5 vs HMAC-MD5).


The man page on openssl command-line options was definitely daunting for me to start figuring out what to do directly. So, I would recommend using some of these resources first. The more common scenarios — generating certificates, verifying certificates, converting between certificate formats, and debugging connections with a test SSL client or server — would have been encountered by folks before. Some of the pertinent stuff which should come in handy includes:

(a)  Categorized command use-cases

(b) Frequently used commands 

(c) Debugging SSL Communication

(d) Certificate Signing Requests, Self-signed Certificates and more

(e) A consolidated list 


The documentation of OpenSSL isn’t the best, and it has had a lot of criticism going its way. And most people do not need to write code directly invoking OpenSSL library function calls. However, HP’s tutorial on OpenSSL programming is a pretty good starting point. This presentation with example code by Shteryana Shopova is a good resource as well. This presentation is more recent (2014), and the author even talks about Heartbleed, citing actual code snippets.

I shall go over code examples in another article.

(5)  TOOLS OF NOTE :  

Wireshark – a rather obvious one.

Charles Web Debugging Proxy: This is a pretty popular and powerful tool. For instance, someone could use Charles to debug an Android app which makes HTTPS requests.


(6.a)     SSL Termination  :  

To offset the computation-intensive portion of the SSL handshake, SSL termination is sometimes adopted. So, for instance, when HTTPS connections hit the “reverse proxy” or the “load balancer”, the reverse proxy can terminate the SSL connection and send plain HTTP requests to the servers sitting behind it. Here’s a nice resource on the same which investigates and compares tools that do SSL termination, and also speaks about the performance impact of hash functions.

(6.b)     SSL  acceleration

The wiki article even points to a few companies who do SSL acceleration. This is usually done using a dedicated chip or GPU. On the opposite side of the spectrum, someone from Google published an article in 2010 saying that SSL is not computationally expensive anymore. It is important to note that this point was for 1024-bit keys, and not 2048-bit keys.

(6.c)      SSL Session Reuse 

In this article about SSL session reuse, the author presents convincing arguments to choose session reuse as opposed to SSL termination. Read the comments section as well, for the discussions were good.

(6.d)    SSL VPN ( Virtual Private Network) :

This article gives a pretty decent introduction to SSL VPNs.


I hope that, in the truest sense of the “hub and spoke” architecture, the reader finds this article to be a decent hub to learn about and understand SSL and related topics better.


On Lock-Free Programming and Concurrency


This article is organized by first outlining relevant terminology, and then exploring the definition and examples of lock-free data structures and programming. Subsequently, I touch upon current code/libraries and recent research literature on the same, and conclude. I’ve also classified the resources that I read, so that it might be helpful/time-saving to someone seeking to explore further. I am just learning this out of my own initiative, since it seemed a very interesting topic, and more importantly, I kept reading experts talking and writing “You shouldn’t be doing this”, “It’s the last thing you should be trying to do”. Now, they forget basic toddler psychology. Once anyone utters those words, it’s impossible to just ignore and move on. You are obligated to dig, get hurt, and get your hands dirty. :D :D 😀


There is quite a bit of confusing terminology with respect to lock-free data structures floating around the internet. It probably prompted this blog article by RethinkDB, an open-source distributed database. They differentiate between wait-free and lock-free, and cite examples.

This blog article in 2013, on a blog dedicated solely to concurrency, tried to put an end to the nomenclature confusion. They state “wait free > lock free > blocking” with respect to guarantees, but since wait-free and lock-free are not that popular, the definitions and their interpretation are still ambiguous.

In this Hacker News thread from 2014, we can see some more discussion on the confusion of terminology.

The book, The Art of Multiprocessor Programming, is generally regarded as the most definitive resource to gain deeper understanding.

(a) Concurrency, Parallelism, Multithreading: Rob Pike gave a really nice talk on this topic. Here’s a blog post/video of his talk. I strongly recommend seeing the slides first to understand the difference between the two; the gopher-cart slides in particular are very illustrative of the concepts. Concurrency does not imply parallelism. It can be present in a single thread too — take the example of kqueue or select/poll event-based TCP servers. On the other hand, multi-threaded systems achieve concurrency by parallelism.

Let’s see more on concurrency with respect to multi-threaded systems.

Here are the definitions, referenced from the book and from lock-free library documentation. Tony Van Eerd gave a really nice talk on lock-free programming at BoostCon 2010; here are the slides of the same.

(b) Lock Free:    At least one concurrent operation is guaranteed to finish in a finite number of steps. In theory it is possible that some operations never make any progress, but this is very unlikely to happen in practical applications.

(c)Wait Free:    Every concurrent operation is guaranteed to be finished in a finite number of steps. It is therefore possible to give worst-case guarantees for the number of operations.

(d) Obstruction Free:   A concurrent operation is guaranteed to be finished in a finite number of steps, unless another concurrent operation interferes.

(e) Livelock:     Livelock happens when processes keep changing state in response to each other, with no progress being made.

(f) ABA problem:   In a single line — if a memory location Thread 1 is watching has value ‘A’, and Thread 2 changes it to ‘B’, does some other stuff, and sets the value back to ‘A’, Thread 1 won’t know anything about it. In the lock-free world, this becomes important.

(g) Priority inversion occurs whenever a high-priority thread must wait for a low-priority thread.

What is Lock-Free programming? When does it become relevant?

 Lock-Based  Concurrency:

It’s important to refresh how exactly lock-based concurrent data structures are implemented, before delving into lock-free data structures. For instance, that article has implementations of concurrent lock-based linked lists (among others), and shows different types of locking — global, and lock-coupling or hand-over-hand locking — with their associated performance trade-offs.

Intuitive look at Lock Free programming:   Look at the YouTube tutorial over here, and the notes over here. The presenter at BoostCon says this — if you can visualize the lock size reducing from one lock over a big critical section, to several locks over smaller critical sections, ultimately leading to a lock the size of a single CPU instruction, then it converges at “lock-freeness”. Hence the importance of the atomic CAS instruction, and the processor-architecture specifics of the same.

He takes the example of a global counter incremented by two threads using atomic CAS operations, referenced from his slides/talk.

word atomic_increment(word& target) {
    word oldval, newval;
    do {
        oldval = target;                       // READ
        newval = oldval + 1;                   // MODIFY
    } while (!CAS(&target, oldval, newval));   // WRITE or retry
    return newval;
}

static word counter;

void thread1() {
    int i = 100;
    while (i--) atomic_increment(counter);
}

void thread2() {
    int i = 100;
    while (i--) atomic_increment(counter);
}


Difference between a Spin-Lock and a CAS loop:

In the atomic_increment() function, the do..while loop is suspiciously similar to a spin-lock, and looks to be bad code. However, the CAS loop is not blocking. A CAS loop can be intuitively thought of as an atomic “if the target is still what it just was, set it to a new value”.

When the two threads try to modify the counter variable in parallel, at least one of them is guaranteed to succeed.

So, now we can present the official definition:

“An algorithm is lock-free if at all times at least one thread is guaranteed to be making progress.”

A few pertinent points on the lock-free approach:

(1) The lock-free approach usually implies the usage of hardware-based atomic CAS (Compare-And-Swap) operations. Preshing’s introduction to the topic covers this part — sequential consistency, atomicity, memory models, and instruction reordering.

(2) The lock-free approach trades throughput and mean latency for predictable latency. If the amount of contention is low, the lock-free approach may get less work done than a lock-based approach, since hardware CAS operations can slow down performance. If contention is higher, more throughput and lower latency are possible.

(3) The lock-free approach is typically taken not for the entire application, but only for a particular set of operations.

Here’s a page with a nice bunch of links about lock-free programming.


How do different languages support Lock-Free programming?

There have been three different schools of thought to address or solve problems involving concurrency, discussed in this article in greater detail.

They are

(a) Lock-free data structures

(b) Fine-grained algorithms

(c) Software Transactional Memory

Languages like Go and Erlang have built-in concurrency features like non-blocking I/O, user-space concurrency, advanced scheduling, and easy messaging — unlike more mainstream languages like C++, C, and Java.

In C++/Java, we can find implementations of lock-free libraries (Boost C++, Facebook’s Folly library, or Dr. Cliff Click’s lock-free hashtable in Java), among others.


In Concurrent Haskell, the approach taken to solve the problem is Software Transactional Memory.

In Erlang, concurrency is achieved through message passing and the Actor Model. Check out this article dealing with the same.

In Go, the approach to concurrency is through channels. On their official page they state: “Don’t communicate by sharing memory; share memory by communicating.” Channels allow you to pass references to data structures between goroutines.

Purely functional data structures are immutable and lock-free. Such structures are called persistent data structures — they preserve all previous versions of the data structure when it is modified. Here’s an interesting Stack Overflow thread on the same.

What are the difficulties of Lock-free programming?

Difficulty in Lock-Free programming:    

Multi-threaded applications using lock-based concurrent structures give engineers enough pain (or joy) as it is. When there are serious performance issues involving locking and contention, the lock-free approach may prove to be beneficial.

(1) In this Stack Overflow thread, we see a discussion about when lock-free data structures or programming perform worse than lock-based ones. In this Stack Overflow thread, we can see cases questioning the performance benefits of lock-free.

(2) Writing lock-free code is hard, as Herb Sutter, a widely known C++ expert, elucidated here in 2008. Among his chief observations:

(a) It’s difficult even for experts to get it right.

(b) In 2008, there wasn’t a lock-free implementation of doubly-linked lists. Most likely, people get famous, publish, or earn their degrees by implementing lock-free data structures.

And hey, I came across a 2014  Technical Report  on Lock-Free Binary Search Trees.

Here’s a well-cited lock-free B+ tree published in 2012 — the first implementation of lock-free B+ trees, per their abstract.


 What are some code-bases to read about Lock-Free Data Structures/Algorithms?

(1) Dr. Cliff Click from Azul Systems has published papers and released code that are well appreciated — for instance, his lock-free hash table.

(2) Dmitry Vyukov is another highly appreciated person in this space. On his site he has a concurrent hashmap.

Additionally, his site has a good treasure trove of articles, links, and code.

(3) The Folly C++ library, which implements concurrent queues among other things.


I’m just scratching the surface of lock-free programming and the black magic of advanced concurrency. It is/was fun learning about the same. 🙂 As an additional side-effect, I am definitely more curious to learn about different languages and their approaches to concurrency — in particular, functional programming languages.


Books, Slides, and in-depth articles:

[11] Double Checked Locking paper

Discussion about Terminology/Nomenclature:

Discussion of related topics, different approaches to parallelism:

[18] On immutability and the need for locks

Stack Overflow threads:

[25] When are lock-free data structures less performant than mutual exclusion (mutexes)?
[27] Does immutability entirely eliminate the need for locks in multi-processor programming?

Good bunch of links listed:

Wikipedia articles:

Research work:

[36] Publications of Nir Shavit, one of the authors of "The Art of Multiprocessor Programming"

Code resources:

[37] Dr. Cliff Click's lock-free hash table
[38] Dmitry Vyukov's concurrent hashmap
[39] Boost library's lockfree constructs

Video tutorial:

[41] Notes of the tutorial

On Custom Memory Pools/Allocators

When I was searching online for an implementation of a custom memory pool, I mainly found references [1], [2]. In this article from IBM developerWorks about Java performance [3], we have an argument against custom memory allocators. Brian Goetz, who wrote the article in 2005, is one of the authors of “Java Concurrency in Practice”, a highly rated book in the Java world.

That notwithstanding, custom memory pool management — or the ideas motivating it — is still very relevant. 🙂

Memory allocation strategy in Folly

Sometime in 2012, Facebook open-sourced Folly (their C++ library). While I am not sure how successfully Folly has been adopted in the wild outside Facebook, the documents and design considerations are sure as hell interesting, and worth reading.

For instance, over here you find a description of the allocation strategy for strings in different length ranges (a replacement for std::string).

And over here, you find a description of vectors. Vectors, when they reach their current maximum memory, do a realloc() and allocate double the memory. Mathematically, they show that a growth factor of 2 is bad, and 1.5 is better. It is information like this that is worth assimilating.

Additionally, here is another great article on Facebook Engineering that gives pointers to the general area of memory allocation and memory pool management. This article is a must-read, especially with the links referred to within the article. Closely related to the article mentioned is the paper detailing the implementation of jemalloc (Facebook hired the guy who wrote jemalloc: Quora thread here).

Memcached -Memory Allocation

A reference point for the importance of slab-based memory allocation can be seen in Memcached. Memcached scaled well with libevent, and the memory allocation strategy it takes does not force the release of memory of items that have been evicted from the cache. Memcached uses slab-based allocation, and here’s a resource to know more about the internals of memcached [8].

Here is another nice resource on memory management on Apple’s developer site [6], which discusses strategies for allocating large objects, objects in shared memory, smaller objects, etc.

   Custom Memory pools- Different Aspects 

1)Why custom memory pool management?

(a) A trade-off between run-time performance, start-up overhead, and the need to deal with memory fragmentation.

(b) Avoiding External Memory fragmentation

(c) For several long-running processes, the bottleneck in the code fast path (that 80-20 rule: 80% of the time is spent in 20% of the code) might be frequent allocations/freeing. Also, systems where memory is a scarce resource (embedded systems).

(d) Freeing up complex nested structures

(e) Reducing cache misses, chiefly with respect to proximity of allocation and accesses.

2) Can there be a formal/mathematical perspective on custom memory pool management?

Here’s an attempt at formalization.

Let X1, X2, X3, … be bucket sizes in non-decreasing order. Let M represent the total memory managed by the custom memory pool.

Consider the set of linear equations:

  1.  C1*X1 = A

  2.  C2*X2 = B

  3.  C3*X3 = C

  4.  Wx1 = Wx1 + (X1 - y)   →  if ‘y’ memory is taken from the X1 pool

  5.  Wx2 = Wx2 + (X2 - y)   →  if ‘y’ memory is taken from the X2 pool, etc.

  6.  W = Wx1 + Wx2 + Wx3 + …

  7.  A + B + C + … + L = M   (M being the total memory managed by the allocator)

In each of those equations, C1 represents the number of elements in the linked list having X1 as the maximum memory that could be allocated. Whenever a request of size Y comes in, we check the buckets in increasing size order for one whose bucket list could accommodate the request. When memory is released, it goes back to the corresponding bucket. Let the set of all these equations be called S. At any given time, the system goes to {S - Y1} or {S + Y2}, corresponding to the application consuming Y1 memory from the pool or releasing Y2 memory to the pool. Correspondingly, the internal fragmentation incurred is reflected in the Wx1, Wx2 equations.

We can see that in the ideal steady state, the fragmentation incurred should be minimal.

 3) What other information might be  gained by having custom memory pool management? 

Whenever an allocation is done, we can add trailer information to the chunk of memory (metadata, application-dependent) with information such as an identifier of who allocated it, and the time it was allocated.

A related, relevant question is how to determine if the custom allocator is actually doing its job. Here’s a Stack Overflow thread on the same, leading to sites about benchmarks and performance analysis.

4) What are some real-world examples about  custom memory management? 

Here’s a blog article [7], where even the comments contain good information about custom memory pool management.

(a) Memory management in Postgres — uses “memory contexts” to free memory easily, instead of freeing each individual chunk in the allotted pool. Refer to /src/backend/utils/mmgr/README in the PostgreSQL source code, available here.

(b)  In general, if allocation and deallocation have to handle objects of widely varying sizes, then custom-memory-pool can be beneficial.

(c) Here’s a write-up with an example, explaining the use of custom memory pools in game engines.

(d) In packet-processing and high-traffic network applications, the strategy of memory pools and memory management can be very important. For instance, about web proxies — here’s a forum discussion of the strategy adopted by Nginx. And here is a relevant Wikipedia article about region-based memory management.

(e) Here’s the GitHub repository of the Hoard memory allocator. The documentation there points to a paper which has been cited 300+ times — an indicator that reading the same is essential for deeper knowledge in the area. 🙂 🙂


 The Unfinished Bottom Line

It’s 2014, and we still find a lot of current literature discussing memory usage patterns, object size allocation, etc. Custom memory pools might not be for everybody, but for a serious programmer, it is something worth investing some time in and understanding.







[6] Apple Developer Documentation



Books I read in 2013

After many, many years — perhaps almost half a decade — I accidentally rebooted my interest in reading in 2013 — more precisely, towards the last few months of 2013. Hopefully I get to read more in the coming years!

These are the books I read:

(a) A Tale for the Time Being — This was shortlisted for the Man Booker in 2013, and seemed to be the popular choice tipped to be the winner, but ended up losing. It is set against the backdrop of the tsunami which hit Japan a few years back. There is a certain elegance and simplicity in the narration that keeps you curious about what is to come. The ending was a slight dampener for me — I can’t say I am a big fan of quantum physics interpreted by a practicing Buddhist priestess. However, the book is a really good take on the ephemeral moments of life. I suspect this has a lot to do with the author being a Buddhist priestess — she is in her zone while articulating the things she probably understands best. A worthwhile read.

(b) One Hundred Years of Solitude — Overall, I liked this book. I say this almost grudgingly. Not for the faint of heart. Highly recommended for bald/balding people, who will not be able to pull their hair out while cursing the author. The author, in my opinion, did not take his book seriously — and did not care a bit for the mental health of his readers.

I like that honesty. No pretensions about catering to anyone’s semi-intellectual, half-grown brain and giving an illusion of an increase in net intelligence. I suspect the author woke up each morning and started his day by asking himself “How can I make my book weirder today?”, and succeeded mostly. I liked the book because it was weird. The genre is officially called “Magical Realism”. To give an example: if I were to write a book about the emotional catharsis of homosexual satanic demons in Papua New Guinea wearing pink suits, hopping upside down on one hand while holding a mobile phone to their ear — that would be magical realism.

The story is about events which occur across almost endless generations of a kooky, morally-challenged, intellectually-challenged(?) family in lala-land. Across generations, people share the same names — Arcadio/Aureliano. There is a ghost too, for good measure, and random weird things keep happening all the time. And there is a non-linear narrative. So, you are never sure which character is alive or dead. After a while you stop bothering about mundane things like life and death. Lots of fun.

(c) Kafka on the Shore by Haruki Murakami: This is an English translation of a Japanese work. I was blown away by this book. The genre it falls under is “Surrealism”. The story woven has layers upon layers upon layers. It is profound, hard-hitting, philosophical, and very weird, a.k.a. surreal. Since this is an English translation of a Japanese work, there are certain minor language idiosyncrasies in the translation — which can mostly be ignored.

(d) India After Gandhi by Ramachandra Guha: A nice book on the contemporary history of India (after independence). Very, very interesting and informative. Just take a look at the number of references the author has used to write this book! That’s some major hard work. Additionally, the author is honest enough to admit this: on very recent events (late 90’s/2000’s), he holds very strong views on what transpired. Hence, as a historian, his view would be clouded/biased — and should be viewed as such. For major events between 1947 and the 1980’s, it’s an amazing read — for instance, he would juxtapose events happening in the world while something was happening in India, and explain why it was significant. For instance, in the early 1950’s when India held its first general election, there were coups/wars/failed governments worldwide — and everyone was speculating when India would disintegrate or its government be overthrown.

(e) Memoirs of a Geisha: An enjoyable, time-pass book. Plot-wise, not very substantive — I think the central theme is heavily inspired by “Cinderella and her evil step-sisters”. The strength of the book is that it sheds light on Japan during the world-war era, and on Japanese culture at that time. The author is not Japanese, so I am no judge of the accuracy or relevance of what he has written. So, if a Japanese dude were to ask me about my pet cobra or my pet tiger, I wouldn’t hold it against him if he face-palms when I ask him about the interesting geishas he has run into.

(f) The Dalai Lama’s Cat (strictly speaking, I read this in 2014): A feel-good, short and sweet book. Uplifting when feeling upset/emotionally down. A nice read.

Revisited this book:

(g) Flow: The Psychology of Optimal Experience: All about flow. Worth reading, just to get to know the general idea. It’s sometimes verbose — I liberally skipped several parts.

(h) Tesla: Man Out of Time: About one of the greatest minds ever to visit Earth in a human body. A nice read. On the flip side, I think there was a time when Tesla was not recognized much for his contributions. Now, I guess if someone actually invents a time-travelling machine, the “Edison-sucks-Tesla-rocks club” mob would come with knives and hand grenades shouting “But Tesla already did that — what’s new?” That is, I think he is now deified, much like what Thomas Alva Edison probably was in his heyday.

On Content based Publish Subscribe Systems

Some time back, I had answered a question on Stack Overflow about publish/subscribe systems. While the direction of my answer was not exactly what the asker was looking for (he was not interested in performance aspects/design trade-offs), I think anyone who stumbles here might find the problem interesting. Here is a gist of my answer on Stack Overflow.

The general problem is called content-based publish/subscribe, and if you search for papers in the same area, you will get a lot of results — for instance, this paper.

Here are a few things the system would need:

1) A data store for the subscriptions, which needs to: a) store the list of subscribers, b) store the list of subscriptions.

2) A means of authenticating the subscription requests and the nodes themselves. a) Server and subscribers communicate over SSL. If the server handles thousands of SSL connections, it’s a CPU-intensive task, especially if lots of connections are set up in bursts.
b) If all the subscriber nodes are in the same trusted network, they need not have SSL.

3) Whether we want a Push or Pull based model:

a) The server can maintain the latest timestamp seen per node, per filter matched. When an event matches a filter, it sends a notification to the subscriber. The client then sends a request, and the server initiates sending the matching events.

b) The server matches and sends filtered events to clients in one shot.

The difference between (a) and (b) is that in (a) you have more state maintained on the client side, making it easier to extend subscriber-specific logic later on. In (b) the client is dumb. It does not have any means to say if it does not want to receive events for whatever reason (say, a network clog).

4) How are the events maintained in memory at the server-side?

a) The logical model here is a table with columns of strings (C1..CN), where each new row added is a new event.

b) We could have a hash table per column, storing a tuple of (timestamp, pointer to event structure), with each event given a unique id. With different data structures, we can come up with different schemes.

c) Events here are considered an infinite stream. If we have a 32-bit event id, we have a chance of integer overflow.

d) If we have a timer function on the server matching and dispatching events, what is the actual resolution of the system timer? Does that have any implications?

e) Memory allocation is a very expensive operation. If your filter-matching logic is going to do frequent allocations/freeing, it will adversely affect performance. How can we manage the memory pool for this particular operation? Would we use different size buckets of page-aligned memory?

5) What should happen if a subscriber node loses connectivity or goes down? a) Is it acceptable for the client to lose events during that period, or should the server buffer everything? b) If the subscriber goes down, until what historical time in the past can it request matching events?

6) More details of the messaging layer between server and subscriber: a) Is the communication between the server and subscribers synchronous or asynchronous?
b) Do we need a binary protocol or a text-based protocol between the client/server? (There are trade-offs in both.)

7) Do we need any rate-limiting logic on the server side? What should we do if we starve some of the clients while serving data to a few others?

8) How would changes of subscriptions be managed? If some client wishes to change its subscription, should it be updated in memory first before updating the permanent data store, or vice versa? What would happen if the server goes down before the data store is written to? How would we ensure consistency of the data store — the subscriptions/server list?

9) This was assuming that we have a single server. What if we need a cluster of servers that the subscribers can connect to? (Whole bunch of issues here:) a) How can network partitioning be handled? (Example: of, say, 5 nodes, 3 nodes are reachable from each other, and the other 2 nodes can only reach each other.) b) How are events/workload distributed among the members of the cluster?

10) Is absolute correctness of information sent to the subscriber a requirement, i.e., can the client receive additional information beyond what its subscription rules indicate? This can determine the choice of data structure — for example, using a probabilistic data structure like a Bloom filter on the server side while doing the filtering.

11) How is time-ordering of events maintained on the server side? (A time-order sorted linked list? Timestamps?)

12) Will the predicate-logic parser for the subscriptions need Unicode support?

In conclusion, content-based pub/sub is a pretty vast area — it is a distributed system which involves the interaction of databases, networking, algorithms, and node behavior (systems go down, disks go bad, a system runs out of memory because of a memory leak, etc.). We have to look at all these aspects. And most importantly, we have to look at the available time for actual implementation, and then determine how we want to go about solving this problem.


Rooting the HCL ME Y2 and installing the Google Play Store

One of my friends has an HCL ME Y2 tablet and asked me if I could root it. It was a good learning experience browsing the relevant forums and learning more about how rooting works internally.

The HCL ME Y2 is a relatively recently launched tablet in India running ICS 4.0.13 (the launch date seems to be mid-August 2012 over here), and as such it does not find much love on the XDA Developers or Android Central forums for queries about rooting or installing the Play Store, compared to other popular brands.

For 15,000 INR, the HCL ME Y2 is a pretty solid piece of hardware. I would have said it was worth the price, but after the launch of the Nexus tablet series by Google, for the equivalent amount of money the Nexus 7 beats it.

If you google for the HCL ME Y2, you will find it has earned the wrath of several customers for the app-store lock-in by HCL.

SuperOneClick does not work. The equivalent methods for rooting the HCL ME U1 tablet doing the rounds, including some YouTube videos, did not work either.

For rooting the device, the tool at did the trick.

In a nutshell, this is what I did: I copied the Play Store APK to /system/app/ and launched it from the installer.


If you do not have GoogleServicesFramework.apk, the Play Store will install but exit immediately after it is started. Damn, I spent so much time trying to figure out why! (You can refer to this thread where someone installed the app store on a Kindle.)


Some Fun with Hash Table performance analysis

I was trying to solve a puzzle some days back, and it could essentially be mapped to the following question:

Given a hash table with M buckets, continue inserting elements into the hash table until every bucket has at least one value associated with it. What is the expected number of insertions needed to achieve this?

The answer I derived corresponds with a quick implementation I did using a random number generator. While this is not the best way to infer correctness, a large difference in magnitude over many trial runs would be a warning sign.
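A quick simulation along those lines (my own reconstruction, not the original code): insert uniformly random keys until every one of the M buckets has been hit at least once, then compare the average over many trials against M times the M-th harmonic number, which is approximately M ln M.

```python
import math
import random

def insertions_to_fill(num_buckets, rng):
    """Insert uniformly random keys until every bucket has been hit
    at least once; return how many insertions that took."""
    filled = set()
    insertions = 0
    while len(filled) < num_buckets:
        filled.add(rng.randrange(num_buckets))
        insertions += 1
    return insertions

M = 100
rng = random.Random(42)  # fixed seed for reproducibility
trials = 2000
average = sum(insertions_to_fill(M, rng) for _ in range(trials)) / trials

harmonic = sum(1.0 / k for k in range(1, M + 1))
print(f"simulated average : {average:.1f}")
print(f"M * H_M (exact)   : {M * harmonic:.1f}")   # ~518.7
print(f"M * ln(M) (approx): {M * math.log(M):.1f}")  # ~460.5
```

The simulated average tracks M * H_M closely; M ln M undershoots it by the constant-factor slack in the harmonic-number bound.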

So,here goes:

Let the size of the hash table be M, and let the hash function distribute keys uniformly across the buckets.

Refreshing the definition of expected value for a random variable:

E(X) = X1*P(X1) + X2*P(X2) + … + Xn*P(Xn)

Let E(k) denote the expected number of insertions needed for an element to land in a fresh bucket, given that k buckets are already occupied.

E(1): the expected number of insertions for the second distinct bucket to get filled, after the first insertion.

E(2): the expected number of insertions for the third distinct bucket to get filled, and so on.

The total required number of insertions = 1 + E(1) + E(2) + … + E(M-1), since the first insertion always lands in a fresh bucket.

So, let the first element be inserted into the hash table.

On the next insertion, the probability of the new element going into the same (already occupied) bucket = 1/M.

The probability that the next two insertions both go into that occupied bucket = (1/M)*(1/M), and so on.


E(1) = 1 + 1/M + (1/M)^2 + (1/M)^3 + (1/M)^4 + … to infinity

This is an infinite geometric progression, and the sum converges to 1/(1 - (1/M)) = M/(M-1).

This is for the first element.
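A quick numeric sanity check of that closed form (illustrative, with M = 10): the partial sums of 1 + 1/M + (1/M)^2 + … do converge to M/(M-1).

```python
M = 10
# 60 terms of the geometric series; (1/10)**60 is negligible.
partial_sum = sum((1 / M) ** k for k in range(60))
closed_form = M / (M - 1)
print(partial_sum, closed_form)  # both ~1.1111
```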

Next, we proceed to the state where we have 2 elements in two different buckets.

The expected number of insertions for the next fresh bucket, while the keys keep mapping onto the same 2 occupied buckets, will be:

E(2) = 1 + 2/M + (2/M)^2 + (2/M)^3 + … to infinity

The sum of this series = M/(M-2).

Now, if we continue the same reasoning and sum all of these infinite series, we will have:

M * (1/(M-1) + 1/(M-2) + 1/(M-3) + … + 1/1)

The term within the brackets is (1 + 1/2 + 1/3 + 1/4 + … + 1/K), which is greater than ln(K+1).

So, the expected number of insertions is approximately M log M (log to base e, where M is the size of the hash table).