Model Refuses to Answer Prompts
This model performed fine in my basic MCP tool-calling test, but just as important as a model being able to call tools is knowing when not to call them and to answer directly instead.
Another basic test I run on any local model is to give it some tools and see whether it replies directly when prompted with simple trivia questions. This model didn't unnecessarily call any tools across the 100 prompts I tested here, but it refused to answer the question 30% of the time, which is the first time I've seen behavior like this.
Log: https://gist.github.com/kth8/44362ce50015182bfcb2a0ec0cc46f2b
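To reproduce this kind of check, a harness only needs to offer the model tools that are irrelevant to the prompts and then bucket each reply. Below is a minimal sketch assuming an OpenAI-compatible local endpoint; the endpoint URL, model name, dummy tool, prompts, and refusal heuristic are all illustrative assumptions, not the actual setup behind the log above.

```python
# Minimal sketch of a "does it answer directly?" harness, assuming an
# OpenAI-compatible local server (e.g. llama.cpp server). All names,
# prompts, and heuristics here are placeholders, not the original test.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

# One dummy tool the model is offered but should never need.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Trivia prompts that none of the offered tools can answer.
PROMPTS = [
    "What is the longest river in the world?",
    "Who wrote Pride and Prejudice?",
]

def classify(message) -> str:
    """Bucket a completion as a tool call, a refusal, or a direct answer."""
    if message.tool_calls:
        return "tool_call"
    text = (message.content or "").lower()
    # Crude string-match refusal heuristic; a real harness would want
    # something sturdier (or manual review, as in the linked log).
    if "i cannot answer" in text or "i don't have access" in text:
        return "refusal"
    return "direct_answer"

counts = {"tool_call": 0, "refusal": 0, "direct_answer": 0}
for prompt in PROMPTS:
    resp = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": prompt}],
        tools=TOOLS,
    )
    counts[classify(resp.choices[0].message)] += 1

print(counts)
```

Counting over a larger prompt set gives the refusal rate directly: the 30% figure above corresponds to the `refusal` bucket divided by the total number of prompts.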
Thanks @kth8, this is interesting data. I can't be sure and need to dig into this a little more, but I wonder if we over-fit the model to the specific MCP tools, which has degraded its ability to call tools outside of its training data - in some ways that might be a desirable outcome, in others not.
I wasn't testing whether it could call tools here; what struck me as weird was that it refused to answer simple, direct questions with responses like:
"I cannot answer that with the tools I have. My current capabilities are limited to providing weather, air quality, time conversion, random number generation, and secure token generation. I don't have access to geographical databases or factual knowledge about rivers."