CBrauer
-Interested User-
Posts: 32
Joined: Aug 18, 2004
|
Posted: Jan 4, 2007 01:29 PM
Msg. 1 of 3
Hello,
I think your IQFeed Client API is fundamentally flawed. You have implemented a fire-hose-and-bucket model: if the line is not fast enough, or if the application consuming data from the Client cannot keep up, data is lost.
Please consider implementing the well-established Publish-and-Subscribe model. This would guarantee that no data is lost. I also believe that most of the problems I see on the Developer Forums would disappear with this redesign.
Charles Brauer
Charles Brauer CBrauer@CypressPoint.com
|
stargrazer
-DTN Guru-
Posts: 302
Joined: Jun 13, 2005
Right Here & Now
|
Posted: Jan 21, 2007 05:10 PM
Msg. 2 of 3
Would this be in addition to, or instead of, the existing API? The firehose is fast and efficient, and if an application is unable to keep up, any ATS (automated trading system) running on that data will not be trading effectively anyway. And if you look at how the exchanges publish their data, that is one massive firehose. Basically, they multicast their data: one stream is published, and everyone pulls out of it what they need.
er... how does that differ from your suggestion?
|
CBrauer
-Interested User-
Posts: 32
Joined: Aug 18, 2004
|
Posted: Jan 21, 2007 10:32 PM
Msg. 3 of 3
Clearly DTN cannot abandon their current API. A new Publish-and-Subscribe system would have to be developed in parallel with the existing one.
A Publish-and-Subscribe system guarantees delivery of the data: nothing is lost.
This solves the problem of delivering data over a noisy, unreliable internet that implements no concept of "Quality of Service".
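To make the contrast concrete, here is a minimal Python sketch of the two delivery models being debated. Nothing here is part of the IQFeed API; the class and variable names (`FirehoseFeed`, `PubSubFeed`, the bucket size, the tick counts) are invented purely for illustration.

```python
import queue

class FirehoseFeed:
    """Fire-hose-and-bucket: a fixed-size buffer. When the consumer
    cannot keep up, the bucket overflows and ticks are lost."""
    def __init__(self, bucket_size):
        self.bucket = queue.Queue(maxsize=bucket_size)
        self.dropped = 0

    def publish(self, tick):
        try:
            self.bucket.put_nowait(tick)  # fails when the bucket is full
        except queue.Full:
            self.dropped += 1             # data silently lost

class PubSubFeed:
    """Publish-and-Subscribe: each subscriber gets its own unbounded
    queue, so a slow consumer falls behind but never loses data."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, name):
        q = queue.Queue()                 # unbounded: delivery guaranteed
        self.subscribers[name] = q
        return q

    def publish(self, tick):
        for q in self.subscribers.values():
            q.put(tick)

# Simulate a burst of 100 ticks against a consumer that never drains.
firehose = FirehoseFeed(bucket_size=10)
pubsub = PubSubFeed()
sub = pubsub.subscribe("slow-ats")

for tick in range(100):
    firehose.publish(tick)
    pubsub.publish(tick)

print(firehose.dropped)  # 90 ticks lost to the full bucket
print(sub.qsize())       # all 100 ticks retained for the subscriber
```

Note the trade-off the guarantee implies: the unbounded per-subscriber queue trades data loss for memory growth, so a production pub/sub system would spool to disk or apply backpressure to slow consumers.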
Take a look at www.Streambase.com and www.Tibco.com.
This thread is, after all, just a Wish List...
Charles
|