May 7, 2004 01:04 PM
Dec 1, 2021 11:56 AM
Jan 20, 2022 12:45 PM
taa_dtn has contributed to 137 posts out of 20453 total posts (0.67%) in 6,471 days (0.02 posts per day).
20 Most recent posts:
Hmm. I haven't run into this problem on my system (Wine 6.21 on Fedora 34). However, I'm still having issues with iqconnect failing to shut down after the last client closes, so I'm killing it explicitly and restarting early each day. Do you have any unexpected iqconnect, explorer, etc. processes lying around?
Thanks for the reply, Gary.
I've written a toy app that exhibits all the correct (expected) behavior, so I'll try to nail down what's different between that and my real apps. I don't have much time available this week, so my next update here might take a while longer.
Now that I have an example working, it's clear that the problem is unrelated to the protocol and client software upgrades.
For reference, I start iqconnect.exe using fork() and execl(), passing a path to the executable and all the appropriate arguments. I wait a few seconds for it to initialize and then I connect. This has worked well for years. CrossOver has never been necessary.
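In case it helps, the launch looks roughly like this. This is a minimal sketch, not my actual code; the paths, product name, and credentials are placeholders.

```cpp
// Minimal sketch of starting iqconnect.exe under Wine via fork()/execl().
// The wine path, IQFeed install path, product name, and credentials below
// are placeholders, not real values.
#include <unistd.h>   // fork, execl, pid_t
#include <cstdio>     // perror
#include <cstdlib>    // EXIT_FAILURE

pid_t launch_iqconnect() {
    pid_t pid = fork();
    if (pid == 0) {
        // Child: replace this process with wine running iqconnect.exe.
        // execl passes each argument directly, so no shell quoting is
        // needed even though the path contains spaces.
        execl("/usr/bin/wine", "wine",
              "/home/user/.wine/drive_c/Program Files/DTN/IQFeed/iqconnect.exe",
              "-product", "PRODUCT_NAME",
              "-version", "1.0",
              "-login", "USERNAME",
              "-password", "PASSWORD",
              "-autoconnect",
              (char *)nullptr);
        perror("execl");       // Reached only if exec fails.
        _exit(EXIT_FAILURE);
    }
    return pid;  // Parent: child's pid, or -1 if fork failed.
}
```

The parent then sleeps a few seconds before opening the socket to the streaming port.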
My concern is that iqconnect is still running, sending data to a socket that it thinks is still connected to a client even though the client process is long gone. This is very likely to cause problems somewhere down the line. At the very least, iqconnect's CPU demands are going to get progressively larger over time as clients connect and disconnect. At the worst, it could block and be unable to accept new connections. Plus, the widget tray fills up with multiple connection manager icons, some of which don't respond to mouse clicks, which is ugly and confusing.
I still have no solution to this, though a few things I've observed suggest the problem might be related to the way I start iqconnect. I've tried several different methods, and they all misbehave, but some in different ways than others. The common behavior for all of them is that iqconnect thinks it's still connected to a client even after the client (a) closes the only socket it has open to iqconnect, or (b) exits entirely.
FWIW, this problem doesn't exist when using protocol 4.9, but does with 6.2.
So, what is the recommended way to start iqconnect under Wine?
I'm in the middle of a long-overdue protocol upgrade (to 6.2) and I'm seeing some odd behavior; I wonder if anyone else has encountered it.
The problem is that long after my app has exited, iqconnect.exe is still running. Diagnostics.exe client stats claims the app is still connected, and still receiving data on the Level 1 port. However, I've confirmed that my app is definitely not running.
ShutdownDelayLastClient is set to a reasonable value (10 seconds), but I doubt it applies, since iqconnect.exe thinks it's still talking to a client. I see that start.exe, conhost.exe, explorer.exe, and iqconnect.exe are all still running, though. Manually stopping the feed closes everything down as expected.
All this is on Fedora 34 Linux using Wine 6.18, so that might be a factor.
Seconding BottomFeeder's request: Is there any estimate for when the historical data will be updated?
Just FYI, today I was seeing the response '!ERROR! Unknown Server Error code 0.' during history fetch from servers in the IP range 22.214.171.124/24. This occurred repeatably. I worked around it by adding a small delay between history requests.
Follow up: Fedora recently upgraded to Wine 6.0, and IQConnect seems to be working again. I'll let you know if the problem reappears.
Back to normal performance levels today.
Hi, Gary. Thanks for the update!
Just to clarify, is the rate limit per-connection, or per-IP-address (or some other key)? Since I have multiple threads issuing requests on separate connections, there are conditions under which more than 50 requests could be issued over the course of a second for all the connections in aggregate, though not for any individual connection.
I implemented a request-rate limiter last week, just in case. Maybe the observed slowdown was unrelated to the changes at your end. We'll see what happens today.
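For anyone curious, the general approach is a sliding-window limiter shared by all the worker threads. This is a simplified sketch of the idea, not my production code; the class name and parameters are illustrative.

```cpp
// Sliding-window rate limiter shared across threads (illustrative sketch).
// Callers block in acquire() until issuing another request would keep the
// aggregate rate at or below max_per_second.
#include <chrono>
#include <deque>
#include <mutex>
#include <thread>

class RateLimiter {
public:
    explicit RateLimiter(int max_per_second) : max_(max_per_second) {}

    // Block until a request may be issued, then record it.
    void acquire() {
        using namespace std::chrono;
        std::unique_lock<std::mutex> lock(mu_);
        for (;;) {
            auto now = steady_clock::now();
            // Drop timestamps that have aged out of the one-second window.
            while (!times_.empty() && now - times_.front() > seconds(1))
                times_.pop_front();
            if ((int)times_.size() < max_) {
                times_.push_back(now);
                return;
            }
            // Window is full: sleep until the oldest request ages out.
            auto wait = seconds(1) - (now - times_.front());
            lock.unlock();
            std::this_thread::sleep_for(wait);
            lock.lock();
        }
    }

private:
    int max_;
    std::mutex mu_;
    std::deque<std::chrono::steady_clock::time_point> times_;
};
```

Each worker thread calls acquire() before sending a history request, so the aggregate across all connections stays under the cap.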
Following up with some data from my logs. Minor correction: there are 7 active threads, not 8; the code says I chose that to stay under half of IQFeed's suggested limit of 15.
On 2020-12-22, my history fetch showed mean latency (time from sending request to arrival of first data record) of 0.22 seconds. Min was about 0.1, median about 0.24, max about 0.6. Mean data transfer time (from arrival of first data record to arrival of last data record) was 0.09 seconds. Min was 0, median about 0.04, max about 10.2.
So, some equities generate a lot of data. For data transfer alone, the best average completion rate I could ever achieve would be about 80 requests per second. On the other hand, some equities trade very thinly, so their transfer time is essentially zero, and the peak request rate could be extremely high. This is what I mentioned in my earlier postings.
On 2020-12-23, the mean latency was 1.22 seconds (!); min 0.4, median 1.0, max 3.5. Mean transfer time was 1.9; min 0, median 0.04, max 118 (!!!).
Things got really slow on 2020-12-23 for both latency and throughput. Maybe it was just an odd coincidence that this happened the same day we received notice of the new rate limiting algorithm. I'm curious to see what happens tomorrow.
Hi, Mathieu. I don't object to showing you the code (other than embarrassment because some of it is old and crufty), but it's straightforward. This is a C++ socket-based app that has existed for more than a decade, running on a capable modern desktop machine with an AT&T gigabit fiber connection, with no recent changes to the code or to its environment, so an overnight factor-of-six slowdown suggests to me that the new rate-limiting on the feed might be having more effect than intended.
Until the new rate-limiting started, I was averaging 12 completed requests per second, which seems pretty reasonable to me. Some loss of performance is caused by round-trip latency (for simplicity and error recovery, as well as compliance with previous IQFeed policy, each thread keeps only one request in flight at a time). But I suspect the main reason for the low average rate is that for some equities, an entire day's worth of ticks takes a significant amount of time to prepare, transmit, and process. It's not just a matter of the maximum rate at which the app is allowed to issue history requests.
Hi, Gary. I'm confused about the purpose and implementation of the new limit.
Each day after the main trading session (at 7PM Eastern) I fetch tick history for all the symbols I'm screening. Currently there are 4090 of them.
My first reaction on hearing about the new rate limit was "That's fine; I'm OK with a lower limit, though I doubt I'm hitting 50 requests per second as it is." I run 8 threads, each of which performs history requests sequentially, so there are at most 8 requests in flight at any given time, and it takes a while for each one to complete and a new one to be issued. (The data has to be received, reformatted, compressed, and written to disk before a new request is issued.)
Yesterday, for example, the history fetch took 339 seconds, for an average of about 12 completed requests per second.
Today the history fetch took 2031 seconds, for an average of 2 completed requests per second. Wow; that's quite a change. It's slower than the days when I had a 6Mbps DSL line.
My best guess is that the majority of history requests finish quickly, so they're hitting the rate limit and are being delayed by as much as a factor of 6. However, a fair number of requests take a long time to finish, which drags down the average way below the nominal rate limit. High peak rate, low average rate.
So I'm curious as to whether your intent is to limit the rate at which requests are initiated, or the bandwidth demand on the servers. If the latter, the current approach may be too drastic; my connection is idle more than 80% of the time.
Thanks! If they need more information, I'll be happy to help debug.
FYI, I've downgraded Wine to version 5.5 (the most recent older version available in the Fedora 32 repositories) and everything works correctly. This will keep me going for a while, but eventually I'll need a more solid fix.
Over the weekend I did a routine OS (Fedora 32) update, and Wine version 5.22 was installed. Now iqconnect won't launch. I've attached the backtrace from attempting to run diagnostics.exe.
Any suggestions for tracking down the problem?
@karthikkrishan: I start my monitoring app when my machine boots. That can be done manually, by a command in rc.local, or by a systemd service. (I haven't done the last, though the configuration process looks pretty reasonable.) The monitor runs until the system shuts down, though it can be killed and restarted manually.
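For the systemd route, a minimal service unit might look something like the sketch below. I haven't actually run it this way, so treat this as a starting point; every name here (description, user, binary path) is hypothetical.

```
[Unit]
Description=Market monitor (hypothetical example)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=trader
ExecStart=/usr/local/bin/monitor
Restart=on-failure

[Install]
WantedBy=multi-user.target
```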
The monitor can be commanded to remain active 24/7, which is useful during development, but normally it sleeps while the market is closed and wakes up shortly before the open. It starts up IQConnect by forking and execing:

/usr/bin/wine /home/home-dir/.wine/drive_c/Program Files/DTN/IQFeed/iqconnect.exe -product product-name -version version-number -login iqfeed-username -password iqfeed-password -autoconnect

Then it opens the streaming port and communicates with the usual protocol.
The monitor has an exponential-fallback procedure to attempt reconnection in case the connection is lost, but I'll skip that for now. It's not clear to me that it's needed to the extent that it used to be.
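The fallback idea is simple enough to sketch. This is illustrative, not my actual code: after each failed attempt, sleep, then double the delay up to a cap. Delays here are in milliseconds so the example runs quickly; real code would use seconds.

```cpp
// Illustrative exponential-fallback reconnect loop (not the actual code).
// try_connect stands in for opening the socket to iqconnect; it returns
// true on success.
#include <algorithm>
#include <chrono>
#include <functional>
#include <thread>

bool reconnect_with_backoff(const std::function<bool()>& try_connect,
                            int max_attempts,
                            int initial_delay_ms = 100,
                            int max_delay_ms = 2000) {
    int delay = initial_delay_ms;
    for (int attempt = 0; attempt < max_attempts; ++attempt) {
        if (try_connect())
            return true;                              // Connected.
        std::this_thread::sleep_for(std::chrono::milliseconds(delay));
        delay = std::min(delay * 2, max_delay_ms);    // Exponential fallback.
    }
    return false;                                     // Gave up.
}
```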
At the end of the day the monitor automatically forks and execs other apps to collect the day's history and fundamentals. Multiple connections work fine; for example, I usually do history fetch with seven clients running in parallel.
During the day I start my other apps manually for analysis and trading. They just assume IQConnect is running, connect to the appropriate port(s), and go. Obviously you could do something more sophisticated, but I haven't found it necessary, and having a single "master" process to handle error conditions simplified things when I was writing the code years ago.
At the end of the trading day the monitor closes the last connection and after a while iqconnect.exe exits. Just to make sure everything stays sane for the next day, the monitor shoots it down with "killall iqconnect.exe" if it's hanging around unexpectedly.
So that's an overview of how it all works. IQFeed is a stronger, more reliable product than it was when I started this long ago, so you can probably get by with a simpler infrastructure nowadays.
I've been using IQFeed under Wine on Fedora successfully for a very long time. My usage pattern is simple, though -- a "monitor" app starts up before the opening session and stays open until the close, while apps performing specialized functions connect as needed during the day. The monitor app is the only one that tracks and manages the status of the service.
I'd certainly like a native API, and in the past have offered to help out with an implementation, but I understand the need to keep the support load under control.
My 64-bit app just executes a 32-bit stub that starts IQConnect. Yes, it's stone age.
Thanks for the clarification!
Hopefully Tim or Steve will correct me if I'm wrong, but my understanding is that it's not possible to run IQFeed completely headless. You can configure things so that the username and password are provided automatically, without creating a dialog window. (That's the way my setup works.) However, as you noticed, there will still be an applet in the tray for the duration of the run.
The MIT-SHM extension is used by GUI toolkits to speed up the transfer of images from the application or toolkit to the X server. I've forgotten which toolkit IQConnect uses (maybe it's Qt?), but apparently it attempts to initialize or use the extension.
You can see from the xdpyinfo output after the line "number of extensions: 23" that MIT-SHM is not among the extensions provided by Xming.
I forgot to mention that you should export the environment variables after setting them. Did you already do that?
You're correct that there is no Linux executable for IQConnect; we have to use Wine.
You should be able to traceroute from your Linux box to your Windows box even though the Windows box isn't directly accessible from the Internet. It should still be directly accessible from your Linux box. If it's not (for example, because a firewall setting prevents it), that could account for the ICMP failure.
I haven't looked into this in any level of detail, but it appears there are a couple of problems.
First, running remotely (over ssh?) means that the MIT-SHM extension won't be usable. It's a shared-memory image transfer extension for improved local performance, and that doesn't work over the net. I suppose it's also possible that Xming doesn't support it; try the xdpyinfo command to see if MIT-SHM is one of the supported extensions.
A quick Google suggests that setting environment variables to disable use of the extension might help before running IQConnect.
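The exact commands from the original post weren't preserved here, but the variable most commonly suggested for disabling MIT-SHM in Qt/X11 applications is the one below; whether it applies to IQConnect under Wine is an assumption, not something I've confirmed.

```shell
# Commonly suggested for Qt/X11 apps (assumption: IQConnect's toolkit
# honors it). Run in the same shell before launching IQConnect.
export QT_X11_NO_MITSHM=1
```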
The ping failure is something different, but at the moment I'm not sure what. Firewall issue? If you traceroute from your Debian machine to your Windows machine, what happens?