
Visual Studio Dev Vexed by Copilot's Obtuse 'Responsible AI' Refusal

Chances are you've been frustrated by advanced AI refusing to answer some questions, opaquely citing violations of security or "responsible AI" policies.

Some systems simply won't divulge exactly what in the current conversation caused them to balk, instead pointing to vague policies and generic guidance.

It turns out Visual Studio developers aren't immune to the problem, which can be even more vexing when they're trying to get some work done in the IDE with the original "AI pair programmer," GitHub Copilot.

In fact, on Microsoft's Developer Community feedback site for Visual Studio, the top open issue as measured by upvotes is titled "Response is blocked by Responsible AI policy."

Response is blocked by Responsible AI policy (source: Microsoft).

The question seems innocuous enough, with no responsible AI policies in any apparent danger of being violated. The feedback item reads only:

[severity:It's more difficult to complete my work]
I asked this question to copilot in VS2022: what is difference between TestInitialize and classinitialize. And the response is, "The response is redacted to meet Responsible AI policies. If this was unexpected, please give feedback.".

I don't know why this violates Responsive AI policy. I would like to suggest to provide more details about why it is blocked by Responsive AI policy, maybe a URI may provide the details.
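For context, the question itself is routine MSTest fare: [TestInitialize] marks a method that runs before each test in a class, while [ClassInitialize] marks a static method that runs once before any of them. A minimal sketch of the distinction (the class and method names here are illustrative):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class CalculatorTests
    {
        // Runs once, before any test in this class executes;
        // must be static and accept a TestContext parameter.
        [ClassInitialize]
        public static void ClassSetup(TestContext context)
        {
            // e.g., stand up shared, expensive fixtures here
        }

        // Runs before every individual test method.
        [TestInitialize]
        public void TestSetup()
        {
            // e.g., reset per-test state here
        }

        [TestMethod]
        public void Add_ReturnsSum() => Assert.AreEqual(4, 2 + 2);
    }

Nothing about the question or its answer, in other words, comes anywhere near an obvious responsible AI tripwire.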

That issue was folded into another one about recurring "the response matched public code so it was blocked" messages. The thread also mentioned links being redacted with the message, "An external link was removed to protect your privacy." It's still open, with commentary posted as recently as Nov. 29.

The frustrating issue is well known, having been examined in the 2023 paper "I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box Generative Language Models." Its abstract reads in part, "Manual examination of responses reveals that refusal is not cleanly binary, and lies on a continuum," but doesn't mention just how frustrating that continuum can be.

That frustration is addressed in the 2023 article "Growing Concerns Surrounding Generative AI and Its Refusal to Respond to Specific Commands," published by DistilINFO Publications, which says, "Refusals can be quite exasperating, and while humans often refuse to respond to certain questions, the emergence of generative AI that refuses to interact raises concerns in the fields of AI ethics and AI law."

Because I use GitHub Copilot in Visual Studio Code to write my articles, I asked it about its guidelines for responding to prompts where questions of "responsible AI" might come into play, and it responded with the vague, obtuse, generic guidance mentioned above:

As GitHub Copilot, I follow Microsoft's Responsible AI principles, which include:
  1. Fairness: Ensuring AI systems treat all people fairly.
  2. Reliability and Safety: Building AI systems that are reliable and safe.
  3. Privacy and Security: Ensuring AI systems are secure and respect privacy.
  4. Inclusiveness: Empowering everyone and engaging people.
  5. Transparency: Being transparent about how AI systems work and make decisions.
  6. Accountability: Ensuring that AI systems are accountable to people.
If a prompt might involve sensitive or potentially harmful content, I will avoid providing assistance and respond with "Sorry, I can't assist with that."

The company's guidance is explained in full in its "Principles and approach" documentation for responsible AI.

As advanced systems evolve, they will likely become more transparent about why they refuse to respond to certain prompts. For example, a 2023 Harvard Business Review article, in discussing "debiased and explainable AI," listed one principle for responsible AI at work as: "Explicitly outline the steps taken to remove, minimize, and mitigate bias in AI-powered employee interventions -- especially for disadvantaged and vulnerable groups -- and provide transparent explanations into how AI systems arrive at their decisions and actions."

So there is hope on the horizon, and it can't come soon enough for developers stymied by "responsible AI" policy messages that offer no advice on exactly what the problem is or how to avoid triggering refusals in the future.

The aforementioned Developer Community feedback item for Visual Studio 2022 is still "under investigation."

About the Author

David Ramel is an editor and writer at Converge 360.
