There appears to be a limit or bug that restricts my agent's ability to query embedded objects within the vector database (I assume that is how Taskade works under the hood, as it certainly behaves like one).
I have been very thorough in arranging the data I upload to the agent's knowledge, going as far as multi-indexing extractions and enrichments from the target file and embedding separate indexes that act as knowledge graphs or navigation references. When it works, it works very well. However, the agent produces only approximately 2-3 accurate responses before it starts stating things such as "it would seem information on that topic is not directly available in the provided knowledge base at this time", when the information very much is available and the agent has direct instructive context to inform its queries.
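For context, here is a minimal sketch of the kind of "navigation reference" index I mean: a lightweight map from keywords to the source sections that cover them, serialised as a standalone file and uploaded alongside the enriched extractions so the agent can resolve a query to the right chunk. All file paths and section text below are purely illustrative, not my actual data, and this is not Taskade's internal mechanism.

```python
import json

# Illustrative source sections (placeholders, not real data).
source_sections = {
    "billing/refunds.md": "Refunds are processed within 14 days of a request.",
    "billing/invoices.md": "Invoices are issued monthly to the account owner.",
    "support/escalation.md": "Escalate unresolved tickets after 48 hours.",
}

def build_navigation_index(sections):
    """Map each lowercase word to the section paths whose text contains it."""
    index = {}
    for path, text in sections.items():
        for word in set(text.lower().replace(".", " ").split()):
            index.setdefault(word, []).append(path)
    return index

nav_index = build_navigation_index(source_sections)

# Serialise as a standalone knowledge file for upload next to the sources.
nav_json = json.dumps(nav_index, indent=2, sort_keys=True)

print(nav_index["escalate"])  # sections covering "escalate"
```

The point of a file like this is that the agent only needs one cheap lookup to learn which source document to query next, rather than relying on vector similarity alone.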
Is this a built-in limitation of the Taskade platform? If it is, my colleagues and I won't be able to use it, as the disruption and unreliability would simply be too great.