Author Topic: Rookie: min hardware requirements (1000+), sockets versus COM & how to best set up things at start... (6 messages, Page 1 of 1)

Dragan
-Interested User-
Posts: 3
Joined: May 9, 2007


Posted: May 9, 2007 09:28 AM          Msg. 1 of 6
Hi,

A few questions, mostly performance-wise:

- What's the minimum hardware configuration for watching ~1000+ symbols (plus occasional history requests)? This would be a machine dedicated to the data feed, so not much else running on it. I'd also be interested in hearing first-hand experience: is a dual CPU a requirement or not? Currently I have one dedicated hosted machine (2.26GHz Celeron, 512MB RAM, 250GB bandwidth). I'm planning on expanding, possibly to a 3.06GHz P4 HT processor, but I just wanted to hear about any experience. Also, is it possible to run it on two machines and spread the load? Talking about the developer version, that is.

- From a performance perspective, is there any difference between using sockets/TCP/IP and the COM interface? What's recommended?

- Any other advice?

Thanks everybody,
Dragan

DTN_Steve_S
-DTN Guru-
Posts: 2093
Joined: Nov 21, 2005


Posted: May 9, 2007 10:22 AM          Msg. 2 of 6
Firstly, some clarification for the benefit of everyone reading this thread. The number of symbols is not necessarily a good indicator of the "load" that your application needs to be able to handle. Do you mean the 1000+ most active symbols across all markets, or the 1000+ most active on one specific market? Or is there some other criterion you are using to choose your symbols?

From my experience:

CPU:
There is little gained from running IQFeed on a dual processor/dual core/HT machine. I mentioned in another thread that IQConnect handles all level 1 data in a single thread so the only advantage of multi-threading capabilities would be for your history requests.

RAM:
While operating under normal conditions, I have never seen IQConnect take more than about 20-30MB of RAM. If you are experiencing more than this, it is likely an indicator of some other problem.

Bandwidth:
In another thread here recently, one of our developers posted that, watching 1300 NYSE symbols (we can probably safely assume "most active" here), his application processes about 8GB of data a day. This is "internal" bandwidth between IQConnect and your application. Your "external" bandwidth (from the servers to your computer) is quite a bit less. I do not have exact figures, but I would guess it is probably less than 1GB.

Latency and packet loss are much more important. While they are usually directly related to bandwidth, most of the connection-related issues that I end up investigating are latency or packet loss related. If your latency to our quote servers is greater than the 200-250ms range, you will almost certainly have problems watching 1000+ active symbols. Packet loss, generally, is unacceptable in any amount. Occasionally you will drop a packet or two on just about any connection, but if you start experiencing it with any regularity, it should be investigated.


Sockets vs COM:
All communication to the servers is via sockets using tcp/ip. The COM interfaces simply encapsulate the socket communication to our servers.

So, in theory, you should see very little difference in performance between the two.
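To make that concrete, a minimal Level 1 client over the raw socket looks roughly like the sketch below. It is a sketch only: it assumes IQConnect is already running locally on its default Level 1 port (5009) and uses the plain-text watch command ("w" followed by the symbol); check the API documentation for the exact port number and command set for your version.

```cpp
// Minimal Level 1 client sketch (Winsock). Assumes IQConnect is already
// running and listening on its default local Level 1 port (5009); verify the
// port number and command syntax against the API docs for your feed version.
#include <winsock2.h>
#include <cstdio>
#include <string>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5009);                    // assumed Level 1 port
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");  // IQConnect runs locally
    if (connect(s, (sockaddr*)&addr, sizeof(addr)) != 0) return 1;

    // Ask for a quote stream on one symbol ("w" = watch in the L1 protocol).
    std::string cmd = "wMSFT\r\n";
    send(s, cmd.c_str(), (int)cmd.size(), 0);

    // Print the comma-delimited update messages as they arrive. A real app
    // would buffer partial lines and hand complete ones to a worker thread.
    char buf[4096];
    for (;;) {
        int n = recv(s, buf, sizeof(buf) - 1, 0);
        if (n <= 0) break;              // connection closed or error
        buf[n] = '\0';
        fputs(buf, stdout);
    }
    closesocket(s);
    WSACleanup();
    return 0;
}
```

A COM-based client ends up doing the same thing under the hood, which is why the measurable performance difference is small; the socket route just gives you direct control over buffering and threading.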

--Edited to add.
It is not possible to "distribute" the load to more than one computer using IQFeed without having a second login account.
Edited by DTN_Steve_S on May 9, 2007 at 10:24 AM

Dragan
-Interested User-
Posts: 3
Joined: May 9, 2007


Posted: May 9, 2007 04:57 PM          Msg. 3 of 6
Steve, I greatly appreciate your response, it's exactly what I needed. If I could just 'abuse' your time some more and ask a few specific questions (and that should be all for now, I think), I hope you don't mind…

First, to answer your question, here are the exact specs on the symbols we're after:

Exchanges: NASDAQ, NYSE, AMEX mostly
Symbols: all that are above $2 (some average) and with average volume over 75,000
…that's almost 2,500 stocks, including the heavier ones, but we're scaling it down to 1,000-1,800 or so to start with (currently we have the 500-symbol limit, the basic developer one, but we're planning to expand that very soon with your extra services).

Architecture (in brief, just FYI, and any input appreciated): We're going to keep/build our own history as we receive the data from you, so we cache everything and use history requests only as a means of getting old or missing data, rebuilding/updating daily (after hours), and so on. This should speed things up, minimize bandwidth, and reduce the load on your servers. So most of the time we're just streaming/watching the symbols, with occasional requests plus end-of-day maintenance. Plus, I'll be doing more or less what that other developer already mentioned (topic 1550), all the usual stuff: processing and emptying the queue ASAP and doing further processing in the background.
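In case it makes the intent clearer, the receive-and-defer part is just the usual producer/consumer split, roughly as sketched below. The helper names read_line_from_feed and handle_update are placeholders for the real socket read and the parsing/caching code; stdin stands in for the feed here.

```cpp
// Producer/consumer skeleton: the reader thread does nothing but drain the
// feed and queue the lines; the worker thread does the real parsing/caching
// in the background. read_line_from_feed() and handle_update() are stand-ins
// (stdin plays the role of the feed socket here).
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

static std::string read_line_from_feed()        // placeholder: blocking feed read
{
    std::string line;
    return std::getline(std::cin, line) ? line : std::string();
}

static void handle_update(const std::string& line)  // placeholder: parse/cache
{
    std::cout << "processed: " << line << "\n";
}

std::queue<std::string> pending;
std::mutex mtx;
std::condition_variable cv;
bool done = false;

void reader()                                   // empty the feed as fast as possible
{
    for (;;) {
        std::string line = read_line_from_feed();
        std::lock_guard<std::mutex> lock(mtx);
        if (line.empty()) { done = true; cv.notify_all(); return; }
        pending.push(line);
        cv.notify_one();
    }
}

void worker()                                   // heavier processing in the background
{
    for (;;) {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [] { return !pending.empty() || done; });
        if (pending.empty() && done) return;
        std::string line = pending.front();
        pending.pop();
        lock.unlock();
        handle_update(line);
    }
}

int main()
{
    std::thread t1(reader), t2(worker);
    t1.join();
    t2.join();
    return 0;
}
```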

CPU/Multithreading: I read that (http://forums.dtn.com/index.cfm?page=topic&topicID=1550), but it's clearer now; I just wanted to re-check. And by 'having a second login account' (distributing the load over multiple machines) you mean 2 separate accounts and fees, right? I see that you have an option of buying an additional 500 symbols, but there is no way to 'split' that over 2 machines/IPs? You don't offer anything similar? I understand, it makes sense, but I just want to be sure what our limitations are.

RAM: That's good to hear. I was thinking more along the lines of the memory I need myself to process things and cache (short-term, just the immediate data) before dumping it somewhere. But I was also afraid of your app overusing memory, so that's good news.

Bandwidth: 8GB, yes, I read that too. But what you're saying is that the data is actually compressed (or at least binary rather than plain text) as sent from your servers to IQConnect (your 'client' sitting on our machine), and that's more like 1GB, as opposed to the data exchanged between IQConnect and my app (which is, e.g., around 8GB), right? Talking rough figures here, got that. By the way, is that for Level 2? That's somewhat more, I guess.

Latency, packet loss: makes sense. Ping to your servers (login.interquote.com, in fact, but that's what you're using) shows ~50ms, up to 60ms (it's a ~5Mbps line or so), so that's OK, I guess. But it still has to be tested with more load, and the quality of the line checked, and all that.

Symbol limit: if we have, say, an 1,800-symbol limit and we're watching all 1,800 regularly, is it permitted to download occasional history for some of the other (not watched) symbols? How does that work? Basically, I was thinking of using all the available symbols for constant watching/streaming and keeping history requests separate from that.

C++ versus C#/.NET: And just one more: any experience or recommendations? I know C++ is the better/faster option (I'm a long-time C++ guy, so that's not an issue), but C# is much easier to work with, so I'm wondering if it's worth the effort. Going from C# over COM (marshaling things over; either way there is some marshaling involved) probably introduces some delay, given the amount of data. Also, what about overall stability in both cases? Has anybody done any tests with this, or have first-hand experience? I'm about to do it myself, but thought maybe somebody had done it already.

Thanks in advance!
D
Edited by Dragan on May 9, 2007 at 05:01 PM

kdresser
-Interested User-
Posts: 71
Joined: Nov 25, 2004


Posted: May 10, 2007 04:13 AM          Msg. 4 of 6
What I'm doing now sounds similar to what you've got planned. My experience matches Steve's information.

I'm watching the 1000 most active Nasdaq symbols with 2 accounts on 2 machines sharing the same internet connection (for historical reasons only). In total, I get about 5GB per day of uncompressed data sent from the 2 IQConnect managers to my 2 frontend apps via TCP/IP. After some initial checking and conversion of just a few of the many fields to binary, I end up with about 150MB per day of binary data. I do not get extra L1 data from regional exchanges, and I do not receive any L2 data.

Architecture: I've split things into several programs/processes. The 2 frontend processes, on two separate machines, receive the feed, log the raw feed to a text file, check and clean up the data, assign a numeric key to each symbol, pull out a few interesting fields, convert them to binary, and send the binary, via TCP/IP, to a data consolidator.

The data consolidator process could be on a third machine, but it runs on one of my two receiving machines; it turns the tick stream into minute bars and stores them in shared memory.
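Roughly, the bar building in the consolidator amounts to the sketch below (simplified and illustrative only, not my actual code; the Tick/Bar structs and the integer symbol key are made up, and a real consolidator also has to handle session boundaries, missing minutes, and corrections):

```cpp
// Simplified tick-to-minute-bar consolidation. Tick, Bar and the integer
// symbol key are illustrative only; a real consolidator also has to handle
// session boundaries, missing minutes, and corrections.
#include <cstdint>
#include <cstdio>
#include <map>
#include <vector>

struct Tick { int symbolKey; int64_t time; double price; int size; };  // time in seconds
struct Bar  { int64_t minute; double open, high, low, close; long volume; };

std::map<int, std::vector<Bar> > bars;  // per-symbol minute bars (shared-memory stand-in)

void on_tick(const Tick& t)
{
    int64_t minute = t.time / 60;              // truncate the timestamp to the minute
    std::vector<Bar>& v = bars[t.symbolKey];

    if (v.empty() || v.back().minute != minute) {
        // First tick of a new minute opens a new bar.
        Bar b = { minute, t.price, t.price, t.price, t.price, t.size };
        v.push_back(b);
        return;
    }
    Bar& b = v.back();                         // otherwise update the current bar
    if (t.price > b.high) b.high = t.price;
    if (t.price < b.low)  b.low  = t.price;
    b.close   = t.price;
    b.volume += t.size;
}

int main()
{
    Tick t1 = { 1, 34200, 25.10, 300 };        // fabricated ticks for illustration
    Tick t2 = { 1, 34215, 25.12, 100 };
    Tick t3 = { 1, 34265, 25.08, 200 };        // next minute -> second bar
    on_tick(t1); on_tick(t2); on_tick(t3);
    std::printf("bars for symbol 1: %zu\n", bars[1].size());   // prints 2
    return 0;
}
```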

A couple of analysing processes look at the shared-memory data and try to make sense of it.

CPU: Both my machines are plain 2.5GHz boxes with 2GB of memory, running Win2K Pro. Even the box with one receiver, the consolidator, and a couple of data analysers doesn't run out of steam -- it spikes to 50% CPU usage whenever my analysers sweep through all 1000 symbols.

RAM: IQConnect, as Steve says, isn't a factor.

Bandwidth: Because of the compression, much less than 8GB / 1000 symbols is flowing over the internet. Unless you have some unusual ISP throttling, I don't think this will be an issue.

Latency: when things go wrong (every year or so), it's usually traceable to high latency, and the culprit is usually some stinky router along the path to IQFeed. High latency will cause IQConnect to get confused and start sending garbage to your app. Your frontend app should be able to detect this and raise alarms. It should also watch for a delayed feed; sometimes IQFeed gets behind the market, but it is usually within a second of the expected timestamps. Your frontend app should be prepared to drop and re-add symbols that have gone "bad" due to latency problems.
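The staleness check can be as simple as comparing each update's timestamp against the local clock and alarming when the gap grows. A sketch only: parse_update_time is a placeholder for extracting the timestamp from the real message format, and the threshold is arbitrary.

```cpp
// Feed-staleness watchdog sketch: compare the timestamp carried in each
// update against the local clock and raise an alarm when the feed falls
// behind. parse_update_time() is a placeholder for pulling the timestamp
// out of the real message format; here it pretends every message is five
// seconds old so the alarm path is exercised.
#include <cstdio>
#include <ctime>
#include <string>

const double kMaxLagSeconds = 3.0;    // alarm threshold; tune to taste

static std::time_t parse_update_time(const std::string& /*msg*/)
{
    return std::time(0) - 5;          // placeholder: "message is 5 s old"
}

static void check_staleness(const std::string& msg)
{
    double lag = std::difftime(std::time(0), parse_update_time(msg));
    if (lag > kMaxLagSeconds) {
        // Real code would raise an alarm and consider dropping and re-adding
        // the affected symbols rather than just logging.
        std::fprintf(stderr, "feed is %.1f s behind the market: %s\n",
                     lag, msg.c_str());
    }
}

int main()
{
    check_staleness("Q,MSFT,25.10,...");   // fabricated update line
    return 0;
}
```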

ProgLang: Go with whatever you work best in; algorithms matter more. My stuff is in Delphi, and 2 programmers worked on it. The first did a quick-and-dirty proof-of-concept front end that ended up using more CPU than IQConnect to do its task. The second programmer rewrote it with better algorithms and more careful character and string manipulation, and it now consumes much less CPU than IQConnect. All communication between IQConnect, the front ends, and the consolidator is via TCP/IP using the open-source Indy library. I have not evaluated or used COM for this. I'm sure that all speed differences between languages and communication methods will be swamped by the effects of program design, so C# and COM will probably work fine. Start with a simple front-end POC. You will have to be good and thorough at multi-threading your front-end process to keep system and GUI delays away from the receiving, logging, checking, cleaning, and binary-forwarding subprocesses.

DTN_Steve_S
-DTN Guru-
Posts: 2093
Joined: Nov 21, 2005


Posted: May 10, 2007 08:54 AM          Msg. 5 of 6
kdresser added a lot of useful information here (thanks).

To answer some of your specific questions:
Quote: And by ‘having a second login account’ (distributing the load over multiple machines) you mean 2 separate accounts, fees, right? I see that you have an option of buying additional 500 symbols, but there is no way to ‘split’ that over 2 machines/IP-s, you don’t offer anything similar? I understand, makes sense, but just to make sure what our limitations are.
Correct, 2 separate accounts running on different operating systems and 2 sets of exchange fees. This is a requirement from the exchanges. I know that at least one of our developers uses a single machine for more than one account by running one within a virtualized OS (VirtualPC/VMware/Parallels), but I highly doubt they are doing heavy data loads on either instance.

Quote: Is it permitted to download occasional history requests for some of the other symbols (not watched)? How that works?
History requests do not count against your symbol limit, so the occasional history request is certainly allowable. However, we do ask that if you are doing a large number of requests (for example, in a batch process), you wait until after market close, when customer load on the servers is greatly reduced. From your initial post, it seems the way you intend to use history requests should not be a problem.

Is there anything that is still unclear at this point?

Dragan
-Interested User-
Posts: 3
Joined: May 9, 2007


Posted: May 10, 2007 01:05 PM          Msg. 6 of 6
…on history requests: yes, I'll handle that gracefully, with no batch processing, and if anything more is required (updating things, etc.) it'll certainly be done after hours (also to minimize the load on our side). So, all in all, it should be a 'well-behaved client'.
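(Roughly what I mean by well-behaved, as a sketch only: wait until after the close and pace the requests rather than firing them all at once. market_is_closed, send_history_request, and the symbol list below are made-up placeholders; the actual request command and lookup port come from the API docs.)

```cpp
// Rough sketch of a "well-behaved" after-hours batch: only run once the
// market is closed and pace the requests instead of firing them all at once.
// market_is_closed() and send_history_request() are placeholders; the real
// request string and lookup port come from the API docs.
#include <chrono>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

static bool market_is_closed()
{
    return true;                      // placeholder: check the clock against session times
}

static void send_history_request(const std::string& symbol)
{
    // Placeholder for: connect to the lookup port, send the history command,
    // read the response, merge it into the local cache.
    std::cout << "requesting history for " << symbol << "\n";
}

int main()
{
    std::vector<std::string> toUpdate;        // symbols needing backfill
    toUpdate.push_back("MSFT");
    toUpdate.push_back("INTC");
    toUpdate.push_back("CSCO");

    if (!market_is_closed()) return 0;        // batch runs after hours only

    for (size_t i = 0; i < toUpdate.size(); ++i) {
        send_history_request(toUpdate[i]);
        // Pace the requests so the batch stays gentle on the servers.
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    return 0;
}
```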

...thanks to you both. It helped a lot and confirmed what I had in mind.

And nothing else for now; this is enough to get me started. But I'll be back with more once I start processing the data (updates/splits, etc.); I'm bound to run into some problems sooner or later.

Thanks and Best