Nov 21, 2005 02:10 PM
Dec 20, 2018 02:30 PM
Jan 7, 2019 05:28 AM
DTN_Steve_S has contributed 2,062 of the 18,870 total posts
(10.93%) over 4,805 days (0.43 posts per day).
20 Most recent posts:
Sorry for the delay responding here. These have been corrected.
Hello, we are looking into this.
OK, I understand now. You're talking about serial options. We do not have this data directly in the feed currently. However, the next version of IQFeed (6.1) will have a way you can figure it out indirectly.
We are adding a List of contract months to the futures symbol's fundamental data.
As a result, using @C as an example:
@CF19C3300 is a serial option.
In IQFeed 6.1, watching @C# will give you the list of contract months, so you can compare the option's month code (F) to the list of futures contract months (MKNUZ) and determine the next contract month the serial option applies to.
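As a rough sketch of that indirect lookup (the month-code ordering and the helper function are my own illustration, not part of the feed): given the option's month code and the contract months from the fundamental message, find the next listed contract month.

```python
# Standard futures month codes in calendar order (Jan..Dec).
MONTH_CODES = "FGHJKMNQUVXZ"

def next_contract_month(option_month: str, contract_months: str) -> str:
    """Return the first listed futures month at or after the option's month,
    wrapping into the next year if necessary."""
    start = MONTH_CODES.index(option_month)
    for offset in range(len(MONTH_CODES)):
        code = MONTH_CODES[(start + offset) % len(MONTH_CODES)]
        if code in contract_months:
            return code
    raise ValueError("no contract months listed")

# @CF19C3300 has month code F; compare against the listed months from the post.
print(next_contract_month("F", "MKNUZ"))  # -> K
```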
Hello, sorry for the delay responding here.
I'm not sure I understand what you're asking for.
Future Options are issued based on a specific futures contract and expire at the same time (or close to it).
The symbology also specifies the underlying contract; the symbol in your example is an option for @ESZ18.
Have you updated your software to use the current protocol?
The 99:99:99 timestamps (and dates) were how older protocols signaled invalid values.
I can't verify at the moment since the market is currently trading, but in the current protocols those fields should be blank, rather than filled with 9s, when they aren't valid.
Can you post the raw IQFeed output messages you get that show the incorrect dates?
Hello, I believe I also replied to some of this via email but I'll answer here as well for future readers.
1. I don't know of anything that should cause this described behavior. Please give me some examples of the exact request/responses you are getting from the feed in both scenarios.
2. This looks like you aren't specifying a protocol for the connection to use. For backwards compatibility reasons, the feed defaults new connections to the oldest supported protocol (4.9) which did not include subsecond timestamps.
3. We only store bid/ask prices (no sizes) in historical data.
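On point 2, a minimal sketch of specifying a protocol right after connecting (the helper function and the version value are illustrative assumptions; the S,SET PROTOCOL command itself is what the feed expects):

```python
def set_protocol_command(version: str = "6.1") -> str:
    """Build the command a client should send immediately after connecting,
    so the feed stops defaulting the connection to the oldest supported
    protocol (4.9), which lacked subsecond timestamps."""
    return f"S,SET PROTOCOL,{version}\r\n"

# Sent over the socket right after connecting, e.g.:
#   sock.sendall(set_protocol_command().encode())
```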
The data in the fields themselves is still correct. It's only the Message Contents field that is misleading.
As a result, the workaround for this issue is to ignore the Message Contents field on summary messages and check the fields manually (again, this is only necessary for summary messages).
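A minimal sketch of that workaround (the message-type letters and field handling here are assumptions for illustration): trust the Message Contents flags on update messages, but inspect the field values directly on summary messages.

```python
def updated_fields(msg_type: str, fields: dict, message_contents: str):
    """On summary messages ('P' here is an assumed type code), ignore the
    Message Contents field and report whichever fields are populated;
    on update messages, the Message Contents flags are reliable."""
    if msg_type == "P":
        return {name: value for name, value in fields.items() if value != ""}
    return message_contents
```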
No, nothing changed on our side.
One possible explanation is if you were watching these symbols on another connection to the feed during the previous test and not during the current test.
The Message Contents field is populated based on the last message received from the servers.
In the scenario where one connection is already watching a symbol and a second client connects and watches the same symbol, the summary message for the second client is generated locally on your machine (because all the data is already there). However, the Message Contents on that summary message is not regenerated; instead it contains whatever values applied to the most recent message received from the server for that symbol (in your example, a bid/ask update).
Slight correction here: the presence of a field's flag in the Message Contents field does not necessarily indicate a change in value, only that the field was updated (sometimes with the same value).
In your scenario (initial snapshot summary message after having just sent a watch request), I would also expect the open flag to be set if the message had an open value populated.
If you start watching pre-market, while the open field is still blank, the 'o' should appear in the trade message in which the open field is populated.
Can you send me an example of this not happening (I just checked a few symbols and they all have had the 'o' in the Message Contents field)?
Hello Yair, our servers do observe DST. As a result,
"EST/EDT depending on daylight saving time in New York"
is the correct answer to question number 1. Likewise, the answer to number 2 should be:
"That we are mistaken and the data content matches the reported time in New York."
Let me know what is causing the confusion for you and I'm happy to offer advice to clear that up.
Thanks for the logfile Matt.
This is a restriction of trial accounts (only 4 days of tick history and 1 year of daily data). As a result, your requests are being converted behind the scenes to request only 365 days instead of 5000.
I'm not able to duplicate this and, off the top of my head, I can't think of any scenario where you would be seeing this behavior.
Can you email me (to dev support) an IQFeed logfile with All Logging enabled? You can configure/enable logging in IQFeed using the Diagnostics app. Note that after running your test and saving the logfile, you will want to reduce the logging back to default levels for performance reasons.
Socket error 10060 is a timeout error on the connection from your machine to our servers.
Each request you make to the feed is handled in its own connection to the server (Create -> Connect -> Request -> Receive -> Close).
As a result, if you're spinning through 3000 stocks, that is 3000 different socket connections to the server over the course of however long it takes you to make those requests.
We have investigated these sorts of reports a few different times over the years and we've never been able to replicate this reliably or identify a specific cause (all of our tests from both internal and external to our network have not been able to replicate).
If this just recently started, you might try resetting your router/modem (most non-enterprise-class hardware will occasionally hiccup under heavy connection load). Of course, we are happy to help troubleshoot this with you, but since a simple re-request usually succeeds, most developers choose to simply implement a retry (as you have) and not worry about it unless the problem gets worse.
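For reference, that per-request re-request approach can be sketched like this (the host, port, request format, and backoff values are assumptions for illustration, not feed requirements):

```python
import socket
import time

def request_with_retry(request: bytes, host: str = "127.0.0.1",
                       port: int = 9100, retries: int = 3,
                       timeout: float = 10.0) -> bytes:
    """Open one connection per request and simply re-request on a timeout
    (error 10060 on Windows surfaces as socket.timeout here)."""
    for attempt in range(retries):
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.sendall(request)
                chunks = []
                while True:
                    data = sock.recv(4096)
                    if not data:  # server closed the connection
                        break
                    chunks.append(data)
                return b"".join(chunks)
        except socket.timeout:
            time.sleep(2 ** attempt)  # brief backoff before the re-request
    raise TimeoutError("request still timing out after retries")
```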
OK, I'm pretty sure I've figured out what is going on here.
We have a maintenance process that runs right around that time in the early AM that clears the previous day's OHL values. My guess is that you caught it in the middle of that process running which is why only some of the symbols were affected.
If you startup sometime after 4AM eastern, you should be safe.
This sounds like the intended behavior of the feed. The client app on your machine is designed to disconnect and shut down when it isn't in use, and it does an ICMP ping to the servers as part of its shutdown process. As a result, the ICMP failure error message isn't the cause of the shutdown.
If you want to keep the app running, you have to maintain a connection to it.
Hello, sorry for the delay responding here.
I'm not seeing this same behavior currently (~6:45AM Eastern). Can you provide some specific examples of this?
In your example, all three sets of fields (Last, Extended Trade, and Most Recent Trade) will reflect the last trade price of 2500.00.
We have separate fields (Settle and Settlement Date) to display the settle information in streaming data. The settlement price is copied to the Close field but never the trade fields.
Message Contents on a summary message isn't very useful since summary messages populate all fields and the purpose of Message Contents is to tell you what was populated in the message.
Edited by DTN_Steve_S on Nov 1, 2018 at 08:55 AM
In the streaming Level1 feed, the "Message Contents" field will contain an 's' when settlements are sent. For all trades, this field will contain a 'C', 'E', or 'O'.
The same logic applies to trades in historical data retrieval, but the field is named "Basis for Last" in tick data requests (and settlements are marked with an 'S' instead of 's').
The difference between the series of trade identification fields for "Last" and "Extended Trade" is that extended trades also include some FormT trades. As a result, you are correct that for Futures, those fields will contain the same values.
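Putting those flags together, a minimal sketch of classifying streaming Level 1 messages (the helper function is my own illustration, not part of the API):

```python
def classify_level1(message_contents: str) -> str:
    """Map the Message Contents flags described above to a message kind:
    's' marks a settlement; 'C', 'E', or 'O' mark trades."""
    if "s" in message_contents:
        return "settlement"
    if any(flag in message_contents for flag in "CEO"):
        return "trade"
    return "other"
```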
Edited by DTN_Steve_S on Nov 1, 2018 at 08:56 AM
Hello, we are continuing to monitor and make changes to address this issue. Friday was particularly bad since we had several different issues contributing to make the problem worse. We made some configuration changes on our servers over the weekend and continue to actively work with our providers to get the issue resolved.