- Tech News & Insight
- March 23, 2025
- Hema Kadia
Selective transparency in open-source AI is creating a false sense of openness. Many companies, like Meta, release only partial model details while branding their AI as open-source. This article dives into the risks of such practices, including erosion of trust, ethical lapses, and hindered innovation. Examples like LAION 5B and Meta's Llama 3 show why true openness, including training data and configuration, is essential for responsible, collaborative AI development.