Microsoft Warns Users to Fact-Check AI Output

Source: MIT Technology Review

Summary

In their terms of service, AI companies warn users not to blindly trust the outputs of their models. The warning comes not from AI skeptics but from the companies themselves: the terms of service for AI models routinely include disclaimers about potential inaccuracies or biases in the output.


Our Reading

The warning is hardly news. AI companies are now telling users to fact-check AI-generated content. Because, you know, trust but verify. Especially when it comes to AI. Who would have thought? The usual suspects still tout AI as the solution to every problem, while their lawyers quietly cover their bases.

It’s not like we haven’t seen this movie before: companies promising the moon and delivering a slightly better version of what we already had. The cycle repeats. New features, same old caveats.


Author: Evan Null