> Yes, something similar but not quite the same. I am thinking about
> having an event driven architecture of some sort, but I certainly do not
> want to go the same route asyncweb folks did with regards to memory
> management. As far as I know asyncweb buffers content in memory and can
> be prone to 'out of memory' conditions even when serving moderate
> amounts of data under heavy load. In my humble opinion this is way worse
> than dropping incoming connections due to worker thread pool depletion,
> because the latter gives the clients a very clean and reliable recovery
> mechanism, whereas the former does not. Dropping the connection due to
> an out of memory condition after having processed the request, while
> sending out the response, is complete insanity.
I was already wondering what drawbacks asyncweb might have.
The story of never blocking anything just sounded too good to be true.
> So, in my opinion there are several options we could pursue.
> (1) Never ever block I/O in HTTP service. As a consequence always buffer
> content in memory. This approach is flawed, but is relatively simple.
> (2) Always block I/O in HTTP service when serving potentially large
> entities in order to prevent session buffer overflow. Requires a worker
> thread per large entity content stream.
Based on my limited understanding of NIO, this option sounds best.
It should allow for both blocking and non-blocking operation, with
a mix of buffering and non-buffering. Or am I getting something wrong?
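To make option (2) concrete, here is a minimal sketch (names and the 8 KiB chunk size are my own assumptions, not anything from the proposal): a dedicated worker thread streams a large entity through one small, fixed-size buffer with blocking writes, so memory use stays constant no matter how big the entity is, and a slow peer simply blocks the worker instead of growing a session buffer.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.WritableByteChannel;

// Hypothetical sketch of option (2): one worker thread per large
// entity, blocking I/O, a single fixed-size buffer.
public final class BlockingEntityWriter {

    private static final int CHUNK = 8 * 1024; // the only memory ever held

    // Runs on a dedicated worker thread; blocks whenever the peer is
    // slow instead of queueing a backlog in the session buffer.
    public static long stream(ReadableByteChannel in, WritableByteChannel out)
            throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(CHUNK);
        long total = 0;
        while (in.read(buf) != -1) {
            buf.flip();
            while (buf.hasRemaining()) {
                total += out.write(buf); // blocking write: natural back-pressure
            }
            buf.clear();
        }
        return total;
    }
}
```

The cost, as the proposal says, is one worker thread tied up for the lifetime of each large content stream.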
> (3) Do not use streams. Use callbacks for I/O events that take NIO
> buffers as parameters.
This *sounds* good, but somehow I don't buy the story. Our entities
are based on streams. File I/O is based on streams. Many developers
are familiar with streams. There's nothing wrong with having callbacks
as an option, but I'd rather not have them as the only option.
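The two styles need not be exclusive. As a rough illustration (interface and class names are invented for this sketch, not part of any actual API), a callback that takes NIO buffers can be wrapped in a thin OutputStream adapter, so stream-based entities keep working while the callback remains available for those who want it:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

// Hypothetical sketch of option (3): content arrives at a callback
// as NIO buffers rather than through a stream.
interface ContentSink {
    void content(ByteBuffer chunk) throws IOException;
}

// Adapter that presents the callback as a plain OutputStream, so the
// callback style stays an option rather than the only option.
final class SinkOutputStream extends OutputStream {
    private final ContentSink sink;

    SinkOutputStream(ContentSink sink) {
        this.sink = sink;
    }

    @Override
    public void write(int b) throws IOException {
        write(new byte[] { (byte) b }, 0, 1);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        sink.content(ByteBuffer.wrap(b, off, len)); // hand bytes to the callback
    }
}
```

With something like this, the core could be callback-driven internally while still exposing the familiar stream API on top.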