[Rxtx] close != flush and may != close!
joachim at buechse.de
Fri Sep 29 03:13:41 MDT 2006
Gregg, I also think that we are talking past each other, so let me
try to restate my point.
If javax.comm.Port.close() tries to flush, no application layer above
it can implement abort on any platform. Port.close() cannot guarantee
that all data will be sent, hence it should not even try. Port.close()
should have the semantics of abort. It should be non-blocking (per
the API definition), as this simplifies application logic and finalization.
What I suggest will behave identically on all OSs:
- javax.comm.Port.close() returns within a guaranteed small delay.
- any succeeding IS.read/OS.write will throw an IOException
- no more native read/write will be scheduled
- the resource will be released as soon as any already scheduled
native read/write has returned (no delay can be guaranteed)
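The four points above can be sketched in Java. This is a minimal
hypothetical illustration, not the javax.comm API: the class name
AbortingPort and the placeholder read/write bodies are my own, and the
native layer is only represented by comments.

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of close()-as-abort. close() flips a flag and
// returns immediately; it never flushes, and pending data may be lost
// by design.
class AbortingPort {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    // Non-blocking: returns within a guaranteed small delay.
    public void close() {
        if (closed.compareAndSet(false, true)) {
            // Here the native layer would be told to schedule no
            // further read/write; the OS resource is released once any
            // in-flight native call has returned (no bounded delay).
        }
    }

    // Any read after close() fails fast, per the proposed contract.
    public int read() throws IOException {
        if (closed.get()) throw new IOException("port closed");
        return -1; // placeholder for a native read
    }

    // Same contract for write.
    public void write(int b) throws IOException {
        if (closed.get()) throw new IOException("port closed");
        // placeholder for a native write
    }
}
```

The point of the sketch is that the close() path touches only Java
state, so its latency does not depend on the serial driver at all.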
It is up to the library implementation to guarantee this behaviour
(and it can be done easily). Some OSs cannot unwind native read/
write calls on abort; that is okay and should not worry a
programmer using javax.comm. It is up to the javax.comm API to define
whether read/write unwinding happens at the Java thread level or not.
Unwinding at the Java level can always be implemented with handover;
this is nice to have, but I am NOT demanding it at all. If the
definition is "IS.read/OS.write do not unwind within a guaranteed
delay after Port.close()", that's perfectly okay with me. As I tried
to explain, this issue shouldn't be of great interest to most
programmers, as the weaker definition can be catered for quite easily
at the application level, but the behaviour should be defined in the
API so that library implementors and programmers don't duplicate
their efforts.
If you do not agree with the above, maybe you can provide a concrete
example where it will create problems or make things overly complicated.
Just to be very clear: I am not suggesting adding my "workers"
abstraction to the javax.comm API. It was just an example of how
non-unwinding read/write can be handled at the application level.
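For concreteness, here is one way the application-level handover could
look. This is my own sketch of the "workers" idea, not code from the
library: the class ReadWorker and its methods are hypothetical, and the
blocking native read is stood in for by an arbitrary Callable.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical worker: the blocking read runs on a dedicated thread.
// An application-level "abort" simply stops waiting for the result;
// the in-flight native call is not unwound, and whatever it
// eventually returns is discarded.
class ReadWorker {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Hand a potentially non-unwinding blocking read to the worker.
    Future<Integer> readAsync(Callable<Integer> nativeRead) {
        return worker.submit(nativeRead);
    }

    // Abort at the application level: accept no more work and
    // interrupt the worker thread where the OS allows it.
    void abort() {
        worker.shutdownNow();
    }
}
```

Usage: the caller waits on the Future with a timeout (or gives up when
the application aborts) instead of depending on the native read ever
unwinding, which is exactly how the weaker API definition can be
catered for.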
BTW: Socket and HttpUrlConnection are much better examples than Swing
of Java code that actually works cross-platform.
On 29.09.2006, at 03:14, Gregg Wonderly wrote:
> Joachim Buechse wrote:
>> Port.IS.read() and Port.OS.write() can be implemented to always
>> unwind at Port.close() (even though this may require the use of non-
>> blocking IO or even a separate thread depending on the features of
>> the OS).
> What I worry the most about is that your implementation on
> different OSes will
> behave so dramatically different that an application written to use
> the API will
> not work reliably. I.e. if different techniques at the source
> level are
> necessary to deal with OS and driver issues, because you chose to
> take advantage
> of some features on one OS that are not available on another.
> This is why I'd really like for the stuff that you are describing
> to not be the
> implementation, but to be a layer on top of the implementation that
> you or
> someone else can write/use to solve the problems that you have with
> the standard
> Look at the abstractions that Swing uses to mask the features and
> details of multiple different graphics implementations. Only by
> not manifesting
> the OS/graphics behaviors into the APIs is it possible to write
> swing code that
> is portable.
>> Even if read+write don't unwind, I disagree that the user hasn't
>> gained anything from a non-blocking close. A blocking Read or Write
>> might have finished before the execution of Abort, or as a
>> successful result of Abort (read blocked by write on Palm OS), or as
>> a non-successful result of Abort. I still have to see an application
>> that reliably uses results obtained from a Context that was aborted.
> I think we are talking past each other on this issue. You are
> talking about
> lots of different OS and driver issues. I'm trying to suggest that
> while those
> are interesting and valid concerns, direct treatment of them
> doesn't belong as a
> visible part, or behavior of the API and its operational