# Summary

- Generative AI could “distort collective understanding of socio-political reality or scientific consensus,” and in many cases is already doing that, according to a new research paper from Google, one of the biggest companies in the world building, deploying, and promoting generative AI.
- The paper, “Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data,” comes from Google researchers and draws on real-world data about how these tools are being misused.
- But the most interesting observation in the paper is that the vast majority of these harms, and the ways they “undermine public trust,” as the researchers say, are often “neither overtly malicious nor explicitly violate these tools’ content policies or terms of service.” In other words, that type of content is a feature, not a bug.
- This observation lines up with the reporting we’ve done at 404 Media for the past year and prior. People who are using AI to impersonate others, sockpuppet, scale and amplify bad content, or create nonconsensual intimate images (NCII) are mostly not hacking or manipulating the generative AI tools they’re using. They’re using them as intended.

# My Opinion on This

- It seems that people who use [[Grandes modelos de linguagem|large language models]], such as [[ChatGPT]] and [[Google Gemini]], and other generative tools are not necessarily going against those tools’ usage policies when they create fake content, impersonate other people, commit identity fraud, scale up content, and so on.
- This reminded me a lot of the paper [[ChatGPT is bullshit]], which argues that [[Grandes modelos de linguagem|large language models]] are just bullshit generators, even when what they produce happens to be correct.