DTN_Steve_S has contributed to 2,042 posts out of 18,778 total posts
(10.87%) in 4,720 days (0.43 posts per day).
20 Most recent posts:
Sorry for the delay answering this.
Yes, corrections are applied to historical data (daily or otherwise).
Unfortunately, there is no notification when this happens.
I have confirmed that these extra limits are on the streaming bars implementation.
The limits that were updated recently apply to historical data requests, but streaming bars adds an extra limitation on time series bars, where the only valid interval values are:
Any value below 300 seconds (5 minutes).
Values from 300 to 3600 seconds that are divisible by 60 (so only multiples of 1-minute bars).
Above that, only 3600 (1 hour) or 7200 (2 hour) bars.
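A quick validity check for a requested interval (in seconds) under the rules above might look like the following sketch. The helper name is mine, not part of the API:

```python
def is_valid_streaming_interval(seconds):
    """Check a time-series bar interval against the streaming bars limits.

    Per the list above:
      * any positive value below 300 seconds (5 minutes) is valid;
      * from 300 to 3600 seconds, only multiples of 60 are valid;
      * above 3600 seconds, only 7200 (2 hour bars) is valid.
    """
    if seconds <= 0:
        return False
    if seconds < 300:
        return True
    if seconds <= 3600:
        return seconds % 60 == 0
    return seconds == 7200
```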
I'll get the documentation updated to reflect this for now and discuss with our server/product people about the necessity of these extra limits for the future.
I don't believe there is another way to do this.
When you don't specify an end date/time, the servers use the time of the request. The servers also always work backwards in time to service the request. As a result, the servers simply find the first X datapoints that meet the criteria of the request and then stop processing data.
You will have to calculate the end time instead.
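Since the servers fill a request working backwards from the end time, one way to anchor a request at a known start is to compute the end time yourself. A minimal sketch (the helper function is hypothetical, not part of the API):

```python
from datetime import datetime, timedelta

def end_time_for(start, interval_seconds, num_bars):
    # The servers return the last N datapoints before the end time,
    # so to get bars beginning at `start`, pass this as the end time.
    return start + timedelta(seconds=interval_seconds * num_bars)

# 390 one-minute bars starting at the 09:30 open end at 16:00.
end = end_time_for(datetime(2018, 4, 27, 9, 30), 60, 390)
```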
Hello, it looks like your account is not authorized for these symbols. These contracts are on NYMEX which requires an extra authorization for delayed data.
If you login to your account on our account management portal here ( https://myaccount.dtn.com/storefront/login ), you can add those auths to your account and the requests should start working within a couple minutes.
hunter, I'm having some issues finding information about this issue from 18mo ago and Tim is out of the office today (I'll chat with him tomorrow when he gets back).
Just to confirm this actually is the same issue, can you give me an example request/symbol where this is happening for you?
Thanks for the feedback!
We will get these issues added to our issue tracker for future development efforts and as you said "keep working".
I noticed in our logs that you are using the first build of IQFeed 6.0. Although it won't address all of your issues, I do recommend upgrading to the latest build on our website. The issue this build will address is the news app not displaying headlines on initial load (but the other scaling issues remain). There are also many other smaller enhancements to the apps and fixes for stability after a lost internet connection.
There are a couple changes to that file that could affect the reliability of the code (I am unable to tell since the changes call functions that aren't included). However, by modifying the same example app to add a simple re-request loop that triggers when the ENDMSG is received, I was able to cycle through ~27K requests without error before I stopped it.
I'm not discounting the idea that the example code might have a bug in it that causes this. The example code is designed to simply demonstrate the communication and isn't very robust. Of course it is also possible that it could be a bug in the feed itself although that would be much less likely.
My recommendation to continue troubleshooting would be for you to examine the calls to BeginReceive and EndReceive to make sure they are always paired correctly. For every call to BeginReceive, there should be exactly one call to EndReceive, and it should happen before the next call to BeginReceive.
I would also add an extra Array.Clear on the socket buffer array before calling EndReceive just to be 100% sure that any data you are processing is new off the socket instead of somehow leftover from the previous read.
You might also want to turn on logging in IQConnect (you can do this via the diagnostics app) to see what the feed is actually sending and compare this to what your app is receiving. Unfortunately, the logging in IQConnect will certainly affect performance of the feed and will likely result in a very large text file so it could be troublesome to work with.
Edited by DTN_Steve_S on Apr 27, 2018 at 01:48 PM
Hello, this is almost certainly a case of either not properly dealing with partial messages you read off the socket or not properly clearing resources between requests.
By "partial messages", I mean that when you read from a socket into a buffer, there is no guarantee that the last characters read will fall on a message break. This means you have to save off the partial message you received and prepend it to the beginning of the data on the next socket read.
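In Python terms, the buffering described above can be sketched like this (a newline message terminator is assumed here for illustration):

```python
def drain_messages(pending, chunk):
    """Append a fresh socket read to the leftover partial message,
    then split out every complete newline-terminated message.

    Returns (complete_messages, new_leftover).
    """
    pending += chunk
    *complete, leftover = pending.split("\n")  # last piece may be partial
    return complete, leftover

# First read ends mid-message; the tail is carried into the next read.
msgs, tail = drain_messages("", "Q,AAPL,170.1\nQ,MSFT,9")
msgs2, tail = drain_messages(tail, "5.3\n")
```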
Henry, I sent an email to your registered email address with this information.
Feel free to reply to that email if you need additional information.
In that case, I would try splitting the symbols across 2 connections with 150 symbols each, or 3 connections with 100 symbols each, and see if the problem goes away.
Can you give an example of the dropped bars issue?
No, I wouldn't split this out to the extent of one connection per symbol. I would try to get it working with a single connection first and then only scale it out if you need to, starting with 2-4 connections and working from there.
The streaming bars can actually handle multiple intervals of the same symbol and it can also handle multiple symbols (of the same or differing intervals) on the same connection as well.
You simply need to specify a unique value in the RequestID on each of your bar watch requests (so submit 4 bar watch requests for each symbol, each with a unique RequestID).
With that said, you might run into some performance issues, based on your description of what you need, if you put everything on a single socket. Since it's entirely hardware dependent, I can't say for sure whether you will run into problems, but I mention it so you're aware that it could come up. If you do have issues, and assuming you aren't saturating all cores on the CPU, you will need to spread your symbols across multiple connections to streaming bars. Streaming bars is multithreaded on a per-client basis, so each connection you make to streaming bars will be handled on its own thread. If you do split across connections, make sure you keep all intervals of a single symbol on the same connection for maximum performance.
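The unique-RequestID scheme above can be sketched as follows. Note the `BW,...` field layout here is an assumption for illustration only; consult the streaming bars documentation for the actual message format before using it:

```python
def bar_watch_requests(symbol, intervals):
    """Build one bar-watch request per interval for a symbol, each
    carrying a unique RequestID so responses can be told apart.

    NOTE: the 'BW,...' field layout below is illustrative only --
    check the IQFeed streaming bars docs for the real message format.
    """
    requests = []
    for interval in intervals:
        request_id = f"{symbol}.{interval}s"  # unique per symbol/interval
        requests.append(f"BW,{symbol},{interval},,,,,,{request_id}\r\n")
    return requests

# Four watches for one symbol, one per interval, each uniquely tagged.
watches = bar_watch_requests("@ES#", [60, 300, 900, 3600])
```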
Sorry for the delay in response here.
We calculate VWAP the same for all symbols that have it available.
The VWAP value takes into account all trades (and all trade volumes) for the current day (it resets on the first trade of the day).
A good example currently is the symbol GOOG which, as of this post, has only 13 trades thus far today:
Price Size Type
1051.51 100 E
1051.3 99 O
1051.3 1 O
1051.3 100 E
1051.14 4 O
1051.23 74 O
1051.23 7 O
1051.23 16 O
1051.23 100 E
1050.89 3 O
1050.89 1 O
1041.16 4 O
1048.5 1 O
The current VWAP value is 1051.22
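Using the trade list above, the computation is just a volume-weighted average, which reproduces the quoted value:

```python
# (price, size) pairs from the GOOG trade list above.
trades = [
    (1051.51, 100), (1051.30, 99), (1051.30, 1), (1051.30, 100),
    (1051.14, 4), (1051.23, 74), (1051.23, 7), (1051.23, 16),
    (1051.23, 100), (1050.89, 3), (1050.89, 1), (1041.16, 4),
    (1048.50, 1),
]

# VWAP = sum(price * volume) / sum(volume) over all trades of the day.
vwap = sum(p * s for p, s in trades) / sum(s for _, s in trades)
# round(vwap, 2) == 1051.22, matching the value quoted above.
```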
Sorry for the delay Craig. This is not available in the API.
I responded to this via email but I'm also responding here for the benefit of future readers.
IQFeed restricts all connections to the local machine due to restrictions by the exchanges prohibiting redistribution of data.
Hello, I got your messages via email. Unfortunately I don't have an answer for you. However, I will say (mostly for future readers of this post) that the format that should be adhered to for these fields is CCYYMMDD HHmmSS (with a single space between the date and time). I understand you're just troubleshooting, and it's good information that the space seems to be what is corrupting your requests, but I'm currently not sure how. At this point I think we've determined it has to be something in your setup.
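For reference, producing that CCYYMMDD HHmmSS format from Python's datetime looks like this:

```python
from datetime import datetime

# CCYYMMDD HHmmSS, with a single space between the date and the time.
stamp = datetime(2018, 4, 27, 13, 48, 0).strftime("%Y%m%d %H%M%S")
# stamp == "20180427 134800"
```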
Do you have the ability to compile/run any of the supplied example apps we distribute with the feed? If so, do they show the same issues? If not, it looks like your account is authorized for our DTN.IQ client. http://www.dtniq.com/template.cfm?navgroup=supportlist&urlcode=33&view=1
Can you test using this (open the Time & Sales and configure the request to use a date range).
Also, which version of Python are you using? And do you have any environment variables set that could be altering the run of the app? https://docs.python.org/2/using/cmdline.html#environment-variables You might try running your app with the -E parameter just to make sure that isn't the issue.
This information is not in the Nasdaq Level 2 feed that we receive. As a result, to get this information, it would require bringing in a new feed from exchange(s) which is not a small task. However, we do track requests for additional data to help us measure demand. If you would like to PM me your IQFeed loginID, I'd be happy to submit a request for this information on your behalf.
Craig, sorry for the delay in responding.
What you are showing here is expected behavior. The bid/ask quotes for these equities come from the exchange on a different feed, and we merge them in-house before forwarding them to customers. Any effort to ensure they are in chronological order would mean intentionally delaying at least one of the feeds while waiting for data from the other. Instead, we opt for the option that gets data to customers as quickly as possible. As a result, you will occasionally see discrepancies between trade timestamps and bid/ask timestamps in this scenario.
Good catch. We do not provide the interest rate within the feed. Our chains display leaves this field up to the user to populate.
Hello, sorry for the late response. We do not provide the greeks themselves within the API but there should be sufficient underlying data within the API that you can do these calculations within your application.
We also provide an option chains display within our DTN.IQ product (your account might already be authorized to use this) that shows greeks, but they aren't available programmatically.