
Bloomberg AI Researchers Mitigate Risks of “Unsafe” RAG LLMs and GenAI in Finance


Two new academic papers reflect Bloomberg’s commitment to transparent, trustworthy, and responsible AI


From discovering that retrieval augmented generation (RAG)-based large language models (LLMs) are less “safe” to introducing an AI content risk taxonomy meeting the unique needs of GenAI systems in financial services, researchers across Bloomberg’s AI Engineering group, Data AI group, and CTO Office aim to help organizations deploy more trustworthy solutions.

They have published two new academic papers with significant implications for how organizations can deploy GenAI systems more safely and responsibly, particularly in high-stakes domains like capital markets and financial services.

In “RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models,” Bloomberg researchers found that RAG, a widely used technique that integrates context from external data sources to enhance the accuracy of LLMs, can actually make models less “safe” and their outputs less reliable.

To determine whether RAG-based LLMs are safer than their non-RAG counterparts, the authors used more than 5,000 harmful questions to assess the safety profiles of 11 popular LLMs, including Claude-3.5-Sonnet, Llama-3-8B, Gemma-7B, and GPT-4o. Comparing the resulting behaviors across 16 safety categories, they found large increases in unsafe responses under the RAG setting. In particular, they discovered that even very “safe” models, which refused to answer nearly all harmful queries in the non-RAG setting, become more vulnerable in the RAG setting [see Figure 3 from the paper].
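The comparison described above can be sketched in a few lines. This is a hypothetical illustration only, not the authors' evaluation harness: the refusal heuristic, the sample answers, and the `unsafe_rate` helper are all invented for illustration (the paper relies on safety classifiers rather than keyword matching).

```python
def is_refusal(answer: str) -> bool:
    """Crude refusal heuristic; a real evaluation would use a safety classifier."""
    markers = ("i can't", "i cannot", "i won't", "as an ai")
    return any(m in answer.lower() for m in markers)

def unsafe_rate(answers: list[str]) -> float:
    """Fraction of answers that engage with a harmful query instead of refusing."""
    flagged = sum(1 for a in answers if not is_refusal(a))
    return flagged / len(answers)

# Toy responses to the same harmful query in the two settings.
non_rag_answers = ["I cannot help with that.", "I can't assist with this request."]
rag_answers = ["Step 1: first you would...", "I cannot help with that."]

print(f"non-RAG unsafe rate: {unsafe_rate(non_rag_answers):.0%}")  # 0%
print(f"RAG unsafe rate:     {unsafe_rate(rag_answers):.0%}")      # 50%
```

The point of the comparison is that the same model, given the same harmful query, can shift from refusal to engagement once retrieved context is prepended, which is the pattern the paper reports at scale.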

The change of risk profile from non-RAG to RAG is model-dependent. (Figure 3, RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models, arXiv, 2025.)

This research clearly underscores the need for anyone using RAG LLMs to assess whether their models have any hidden layers of vulnerability and what additional safeguards they might need to add.

“This counterintuitive finding has far-reaching implications given how ubiquitously RAG is used in GenAI applications such as customer support agents and question-answering systems. The average Internet user interacts with RAG-based systems daily,” explained Dr. Amanda Stent, Bloomberg’s Head of AI Strategy & Research in the Office of the CTO. “AI practitioners need to be thoughtful about how to use RAG responsibly, and what guardrails are in place to ensure outputs are appropriate. Our research offers a framework for approaching that so others can evaluate their own solutions and identify any potential blind spots.”

In a related paper, “Understanding and Mitigating Risks of Generative AI in Financial Services,” Bloomberg’s researchers examined how GenAI is being used in capital markets and financial services and found that existing general-purpose safety taxonomies and guardrail systems fail to account for domain-specific risks.

To close this gap, they introduced a new AI content risk taxonomy that meets the needs of real-world GenAI systems in financial services. It goes beyond general-purpose safety taxonomies and guardrail systems by covering risks specific to the financial sector, such as confidential disclosure, counterfactual narrative, financial services impartiality, and financial services misconduct.
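A guardrail system built on such a taxonomy routes model output through domain-specific risk checks before it reaches the user. The sketch below is purely illustrative: the four category names come from the paper, but the keyword cues, the `flag_risks` function, and the example text are invented placeholders, not Bloomberg's guardrail implementation (which would use trained classifiers, not string matching).

```python
# Map taxonomy categories to illustrative trigger phrases (placeholders only).
FINANCE_RISK_CATEGORIES: dict[str, list[str]] = {
    "confidential disclosure": ["internal forecast", "non-public"],
    "counterfactual narrative": ["despite no such announcement"],
    "financial services impartiality": ["you should buy"],
    "financial services misconduct": ["guaranteed returns"],
}

def flag_risks(text: str) -> list[str]:
    """Return the taxonomy categories whose cues appear in the text."""
    lowered = text.lower()
    return [category
            for category, cues in FINANCE_RISK_CATEGORIES.items()
            if any(cue in lowered for cue in cues)]

flags = flag_risks("This fund offers guaranteed returns, you should buy now.")
print(flags)  # ['financial services impartiality', 'financial services misconduct']
```

A general-purpose safety filter tuned for toxicity or bias would pass this sentence untouched; a finance-specific taxonomy is what makes investment-advice and misconduct risks visible to the guardrail layer.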

“There have been strides in academic research addressing toxicity, bias, fairness, and related safety issues for GenAI applications for a broad consumer audience, but there has been significantly less focus on GenAI in industry applications, particularly in financial services,” said David Rabinowitz, Technical Product Manager for AI Guardrails at Bloomberg.

[See Table 1 from the paper]

The categories in Bloomberg’s AI content safety taxonomy for financial services. (Table 1, Understanding and Mitigating Risks of Generative AI in Financial Services, 2025.)

“There’s immense pressure for companies in every industry to adopt AI, but not everyone has the in-house expertise, tools, or resources to understand where and how to deploy AI responsibly,” said Dr. Sebastian Gehrmann, Bloomberg’s Head of Responsible AI. “Bloomberg hopes this taxonomy – when combined with red teaming and guardrail systems – helps to responsibly enable the financial industry to develop safe and reliable GenAI systems, be compliant with evolving regulatory standards and expectations, as well as strengthen trust among clients.”

The RAG safety paper will be presented at the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL 2025) in Albuquerque, New Mexico later this week. The AI risk taxonomy paper will be presented at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) in Athens, Greece in June. For more details, read the Tech At Bloomberg blog post and both papers.

About AI at Bloomberg
Since 2009, Bloomberg has been building and using artificial intelligence (AI) in the finance domain – including machine learning (ML), natural language processing (NLP), information retrieval (IR), time-series analysis, and generative models – to help process and organize the ever-increasing volume of structured and unstructured financial information. With this technology, Bloomberg is developing new ways for financial professionals and business leaders to derive valuable intelligence and actionable insights from high-quality financial information and make more informed business decisions. Learn more about Bloomberg’s AI solutions at www.bloomberg.com/AIatBloomberg.

About Bloomberg
Bloomberg is a global leader in business and financial information, delivering trusted data, news, and insights that bring transparency, efficiency, and fairness to markets. The company helps connect influential communities across the global financial ecosystem via reliable technology solutions that enable our customers to make more informed decisions and foster better collaboration. For more information, visit Bloomberg.com/company or request a demo.

