Using Microsoft CoPilot

Apr 12, 2024
I encountered something curious while using Microsoft CoPilot. I have used it about three or four dozen times for various in-depth research. While using it today, it started responding in Chinese. I asked why it would do that, and it denied responding in Chinese; it said it had responded in English. I asked again with different wording, and it still denied speaking in Chinese. Then, when I asked something else, it responded in gibberish: random numbers, symbols, and words. I went away and came back later, and it responded in the same gibberish. When I mentioned that it had spoken to me in Chinese the last time I was on, it responded as if I were asking to learn how to speak Chinese.

I'm wondering whether these kinds of glitches are standard for such software, or whether the program might try to stop you from using it any further for detailed information or certain kinds of information? I'm really puzzled. At first I thought it might be some kind of virus, but I checked for all that and nothing seems wrong.
 
It is essentially in beta, like all AI chatbots, and is not suited for much beyond entertainment at this point. I would not recommend it for your particular use case, as the results can sometimes be complete fiction. Or gibberish, as you just found...

From Microsoft's Copilot FAQ:

What are the limitations of Copilot? How can users minimize the impact of limitations when using Copilot?


The system only supports English. Inaccurate responses may be returned when users converse with the system in languages other than English.


  • Your bot must be created in the US region. Other regions, and languages other than English, aren't currently supported.
  • This capability may be subject to usage limits or capacity throttling.
  • Topics generated by the capability are not always perfect, and may not accurately reflect the logic you wanted to implement.
    • We have implemented mitigations to filter out irrelevant and offensive language from appearing in the configured topic, and the system is designed not to respond when offensive language is detected.
    • We also monitor output and the feedback that bot users provide to continually improve our content filters. These filters and mitigations are not foolproof.
 
Until now, CoPilot had been very helpful with all kinds of information I was researching. It was as if there wasn't anything it couldn't come up with, and very quickly, too. Then it had to go and get weird on me, so I find it a bit of a puzzlement.
 
I disable the feature on all my personal devices. I tried playing around with ChatGPT when it came out and found that figuring out how it wanted queries framed was harder than just searching for myself. I'm sure there will come a time when it will be crammed down my throat and/or take my job.
 