News
8 May 2026, 09:32
ASIC pushes brokers to boost cyber defenses against frontier AI risks

The Australian Securities and Investments Commission (ASIC) has warned financial firms and market participants to step up cybersecurity protections as artificial intelligence continues to amplify cyber threats globally. It maintained that, while cyber threats have always been a concern, sophisticated AI tools like Claude Mythos could dramatically accelerate the discovery and exploitation of vulnerabilities. In an open letter, the regulator advised companies to secure their systems against AI-accelerated risks now rather than depend on future AI tools, advocating a technology-neutral, principles-driven approach to the urgently needed cyber upgrades.

What does ASIC expect from licensees across the country?

Frontier AI has pushed cyber risk into a “new era,” cautioned ASIC Commissioner Simone Constant. She noted that, despite the potential benefits of advanced AI models, they can exploit vulnerabilities much faster than most anticipate. That means isolated gaps can now cause a total system collapse, with average attackers gaining access to high-level hacking techniques.

The communication follows evidence from Connective that brokers are integrating AI tools without the necessary defensive frameworks. Connective chief executive Glenn Lees contended that the broker industry is buzzing with AI excitement but lacks the structure needed for secure, steady deployment. He urged brokers to build a solid foundation of strategy, systems, and governance, asserting that this is probably the only way to make AI adoption work.

ASIC’s open letter also asked licensees to address their security gaps now, rather than waiting to see how AI threats evolve. Constant explained that a ready-to-go response plan is essential, since the basic rules of cyber safety don’t change just because the technology does.
She added that top-level management must take ownership, ensuring that rigorous testing and early remediation happen well before a threat becomes a crisis. She further commented, “The clock is at a minute to midnight – if you aren’t on top of your cyber resilience already, the time to act and prepare is right now.”

Alongside ASIC, the Australian Prudential Regulation Authority (APRA) cautioned banks that their governance and control measures for artificial intelligence are lagging behind the rapid expansion of AI tools. APRA member Therese McCarthy Hockey stated: “The AI revolution presents tremendous opportunities for banks, insurers, and superannuation trustees to deliver improved efficiency and enhanced customer services. But we cannot be blind to the risks of such powerful technology.”

ASIC took action against FIIG Securities

ASIC recently moved against Australian fixed-income specialist FIIG Securities Limited (FIIG) for failing to implement proper cyber safeguards for its large client base for years. The firm was directed to pay pecuniary penalties totaling $2.5 million and about $500,000 towards ASIC’s costs.

Reportedly, FIIG’s security weaknesses contributed to the scale of a 2023 cyber breach that exposed confidential data, including tax file numbers, bank account details, and identification documents. About 18,000 clients received notice that their sensitive personal details may have been leaked. At the time, FIIG conceded that its cybersecurity arrangements were inadequate under its Australian Financial Services (AFS) license requirements and that better safeguards may have reduced the impact of the breach. By its own admission, the company also failed to follow its own policies designed to prevent exactly this kind of data leak. The Federal Court also mandated an independent audit to bring FIIG’s cyber resilience up to a professional standard.
Following the case’s outcome, ASIC Deputy Chair Sarah Court commented: “ASIC expects financial services licensees to be on the front foot every day to protect their clients. FIIG wasn’t – and they put thousands of clients at risk. In this case, the consequences far exceeded what it would have cost FIIG to implement adequate controls in the first place.”
8 May 2026, 08:06
LayerZero Risks Escalate as Developers Push Security Debate

Security researcher Banteg ignited a debate when he highlighted LayerZero’s default multisig setup, which exposed billions in OFT (Omnichain Fungible Token) assets to potential compromise. His research showed that LayerZero’s default configuration created major security risks for many connected projects, and the controversy pushed several protocols to improve security or move to alternatives such as Chainlink CCIP.

A heated debate broke out in the ETHSecurity Community Telegram Group between Bryan Pellegrino, co-founder and CEO of LayerZero, and security researchers. At issue was a default library contract that LayerZero Labs could upgrade without a timelock, putting more than $3 billion in LayerZero Omnichain Fungible Tokens (LZ OFTs) at risk of a compromise similar to the recent rsETH hack.

The Spark: Vulnerable Default Library Exposed

Banteg highlighted that LayerZero’s default library contract allowed the team to make instant upgrades, with no delay mechanism such as a timelock. With this setup, team members could forge a cross-chain message, mimicking the rsETH exploit in which attackers drained funds by faking verifications. Projects such as Ethena and EtherFi were using this default library just weeks ago, according to Banteg. Even now, onchain data shows $178 million in value from various projects remains exposed to this risk if LayerZero Labs’ control is abused.

The Yearn developer escalated the issue by warning that many protocols were still dangerously dependent on LayerZero’s default 3-of-5 multisig setup. He argued that projects relying on the default receive library without stronger protections were exposing themselves to unnecessary risk, as any compromise of LayerZero’s multisig could allow attackers to drain connected adapters instantly.
Following the Kelp exploit, Banteg estimated that vulnerable adapters initially represented around $3.13 billion in potential exposure, though that figure later dropped significantly after some projects hardened their configurations. Despite this progress, he stressed that many protocols remained vulnerable. By publishing exact technical guidance for securing these integrations, Banteg shifted the debate from theory to actionable risk, reigniting concerns over LayerZero’s centralized dependencies. LayerZero does not need to act maliciously for danger to arise: any compromise of its systems could lead to a supply chain attack on all dependent projects. This mirrors past audits flagging similar trusted-party risks in LayerZero’s Endpoint and UltraLightNode contracts.

Multisig Signers Caught in High-Risk Activities

Onchain evidence showed that LayerZero Labs’ production multisig signers, keys meant to secure billions, were used for risky personal activities. These included trading the memecoin McPepes (PEPES) on Uniswap, DEX swaps, and bridging assets, exposing the keys to phishing sites. Zach Rynes, a Chainlink community figure, called it out on X (formerly Twitter), labeling it a total failure of basic opsec and key isolation and raising supply chain attack fears. Pellegrino claimed the team was testing “PEPE’s OFT integration,” but critics noted that PEPE was not even deployed yet, and McPepes is a different token altogether. This poor handling of production keys echoes the project’s prior North Korea hack exposure, in which the Lazarus Group targeted it through compromised RPCs.

LayerZero’s History of Security Issues

LayerZero Labs has faced repeated scrutiny for opsec lapses. North Korean hackers managed to infiltrate its infrastructure, spoofing RPC data in the KelpDAO rsETH exploit that stole $290-292 million, which LayerZero blamed on Kelp’s single DVN setup.
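The 3-of-5 arrangement at the center of this criticism is an m-of-n threshold scheme: any three of the five designated signers are enough to authorize an action. A minimal Python sketch of that approval logic (names and structure are hypothetical illustrations, not LayerZero's actual contracts):

```python
# Minimal m-of-n multisig approval check (illustrative sketch only).
class Multisig:
    def __init__(self, signers, threshold):
        self.signers = set(signers)   # authorized signer addresses
        self.threshold = threshold    # approvals required, e.g. 3 of 5
        self.approvals = {}           # action -> set of approving signers

    def approve(self, action, signer):
        if signer not in self.signers:
            raise PermissionError(f"{signer} is not an authorized signer")
        self.approvals.setdefault(action, set()).add(signer)

    def is_executable(self, action):
        # Executable once the approval count reaches the threshold.
        return len(self.approvals.get(action, set())) >= self.threshold

wallet = Multisig(["s1", "s2", "s3", "s4", "s5"], threshold=3)
wallet.approve("upgrade-lib", "s1")
wallet.approve("upgrade-lib", "s2")
print(wallet.is_executable("upgrade-lib"))  # False: only 2 of 3 approvals
wallet.approve("upgrade-lib", "s3")
print(wallet.is_executable("upgrade-lib"))  # True
```

The security concern raised in the article follows directly from this structure: compromising any three of the five keys, for instance via phishing enabled by risky personal use of the signer addresses, is sufficient to authorize arbitrary actions.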
Past reports like ZeroValidation detailed multisig exploits allowing arbitrary messages without proper sign-off, and projects migrating away cite these as signs of centralized risks spreading to user funds. The rsETH hack showed how weak configurations amplify dangers, with LayerZero halting signatures for single-verifier apps post-incident. Critics argue the defaults push users into risky paths without clear warnings.

Bryan vs Researchers: Clash in Telegram

In the ETHSecurity Telegram debate, Pellegrino defended LayerZero, but researchers pushed back on the library risks and multisig misuse. They stressed that production keys connected to DEXs and memecoin trades scream phishing bait, especially after the North Korea breach. Pellegrino dismissed some claims, but the group highlighted the $3B+ OFT exposure.

Influencer Backlash and Project Shifts

Crypto influencer Ed argued on X that the protocol’s defenders overlooked a major issue: its own centralized infrastructure had been compromised. KelpDAO, after the April 18 LayerZero-linked exploit, announced its migration of rsETH to Chainlink CCIP over concerns about infrastructure security and unanswered ecosystem questions. Solv protocol has followed with an even larger transition, moving its more than $700 million SolvBTC and xSolvBTC ecosystem away from LayerZero bridges after a security review. These back-to-back migrations highlight a growing industry shift toward stronger security guarantees, proactive monitoring, and institutional-grade cross-chain infrastructure, with Chainlink gaining almost $1 billion in assets. Industry voices like Yearn’s Banteg and Zach Rynes backed the concerns around LayerZero, pushing for stronger security standards.
Broader Implications for Cross-Chain Security

LayerZero’s OFT (Omnichain Fungible Token) standard powers billions of dollars in cross-chain token transfers by using a burn-and-mint system, where tokens are burned on one chain and recreated on another. While this model has helped many projects scale across blockchains, its default security setup has raised serious concerns. In many cases, protection depends heavily on LayerZero Labs’ multisig infrastructure, meaning a small group of key holders can control critical operations. If these keys are exposed or internal systems are compromised, user funds and protocol security could be at risk. Security experts have also pointed out that some of LayerZero’s libraries lack stronger upgrade protections or decentralized safeguards, which weakens trust in its modular bridge design. As a result, several projects are now reconsidering their reliance on LayerZero and moving toward alternatives like Chainlink CCIP, which are increasingly viewed as more secure.

This shift highlights a bigger lesson for the crypto industry: strong code alone is not enough. Protocols also need better operational security, including timelocks, isolated key management, and multiple independent verifiers by default. For users, the real danger usually comes not just from smart contract bugs, but from centralized infrastructure and poor security practices behind the scenes.
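The timelock protection that critics say the default library lacked amounts to a mandatory delay between proposing and executing an upgrade, giving dependent projects a window to inspect the change or exit before it takes effect. A hypothetical Python sketch of the pattern (all names and the 48-hour delay are illustrative, not LayerZero's contract code):

```python
# Timelock-guarded upgrade: a proposed change can only execute after a
# mandatory delay. Timestamps are passed in explicitly for clarity.
class TimelockedUpgrade:
    DELAY = 48 * 3600  # 48-hour delay, an illustrative value

    def __init__(self, initial_impl):
        self.current_impl = initial_impl
        self.pending = {}  # proposed implementation -> earliest execution time

    def propose(self, new_impl, now):
        # Anyone watching the chain sees the proposal DELAY seconds early.
        self.pending[new_impl] = now + self.DELAY

    def execute(self, new_impl, now):
        eta = self.pending.get(new_impl)
        if eta is None:
            raise ValueError("upgrade was never proposed")
        if now < eta:
            raise RuntimeError("timelock has not elapsed")
        self.current_impl = new_impl

lib = TimelockedUpgrade("v1")
lib.propose("v2", now=0)
try:
    lib.execute("v2", now=3600)  # one hour in: too early, raises RuntimeError
except RuntimeError:
    pass
lib.execute("v2", now=TimelockedUpgrade.DELAY)  # delay elapsed: succeeds
print(lib.current_impl)  # v2
```

Without such a delay, an instant upgrade by whoever controls the keys, whether the team or an attacker who compromised them, takes effect before integrators can react, which is exactly the supply-chain exposure the researchers described.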
8 May 2026, 05:00
DeFi Platform TrustedVolumes Hit By $6.7M Hack As 2026 Exploits Surge

Another multi-million-dollar attack has hit the DeFi sector after liquidity provider and market maker TrustedVolumes fell victim to a smart contract exploit on Thursday night.

TrustedVolumes Hit By $6.7M Hack

On Thursday, DeFi platform TrustedVolumes, one of 1inch’s liquidity providers and market makers, suffered an exploit that drained millions of dollars in multiple assets from the project. According to reports from blockchain security firms PeckShield and Blockaid, the attacker stole approximately $6 million in Wrapped Ethereum (WETH), Wrapped Bitcoin (WBTC), and USDT after exploiting a vulnerability in the protocol’s core signature validation logic, which allowed them to bypass authorization checks and forge trading orders. Notably, the hacker quickly exchanged all assets for 2,513 ETH on a Decentralized Exchange (DEX) and distributed them across three addresses. In an X post, TrustedVolumes confirmed the incident, sharing the addresses currently holding the stolen funds and updating the estimated loss to roughly $6.7 million.

The vulnerability was in a TrustedVolumes-controlled custom RFQ (request for quote) swap proxy. Crypto researcher Humphrey explained that “the Custom RFQ Swap Proxy contract contains a function designed to manage the ‘authorized order signer’ whitelist. Such whitelist mechanisms are common in DeFi—only addresses on the whitelist can issue valid transaction instructions on behalf of the protocol.” However, he noted that “this registration function is public and lacks any permission modifiers.” As a result, the attacker called this public function and registered themselves as an authorized order signer. “Since any external address can call this function, it is equivalent to giving everyone the ability to make a copy of the safe’s key,” the researcher continued.
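The flaw Humphrey describes, a public registration function with no permission check, can be illustrated in a few lines of Python. This is a hypothetical sketch of the pattern, not the actual EVM proxy contract; all names are invented for illustration:

```python
class RFQProxy:
    """Sketch of an RFQ swap proxy guarding orders with a signer whitelist."""

    def __init__(self, owner):
        self.owner = owner
        self.authorized_signers = set()

    # VULNERABLE: public and ignores the caller -- anyone can self-register,
    # which is the missing "permission modifier" described in the article.
    def register_signer_vulnerable(self, caller, signer):
        self.authorized_signers.add(signer)

    # FIXED: only the contract owner may change the whitelist.
    def register_signer_fixed(self, caller, signer):
        if caller != self.owner:
            raise PermissionError("caller is not the owner")
        self.authorized_signers.add(signer)

    def is_valid_order(self, signer):
        # Orders are trusted only if signed by a whitelisted address.
        return signer in self.authorized_signers

proxy = RFQProxy(owner="protocol")
proxy.register_signer_vulnerable(caller="attacker", signer="attacker")
print(proxy.is_valid_order("attacker"))  # True: the attacker copied the "safe's key"
```

With the vulnerable variant, every forged order the attacker signs passes validation; the one-line ownership check in the fixed variant is all that was missing.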
Same Hacker, Different Attack

Online reports revealed that the attacker was the same hacker responsible for the $5 million 1inch Fusion V1 Settlement contract exploit in March 2025, of which TrustedVolumes was the primary victim. Humphrey highlighted that while the same individual carried out both attacks, they were significantly different on a technical level. According to the post, the 2025 vulnerability involved low-level EVM memory manipulation in the 1inch Fusion V1 Settlement contract. At the time, the hacker “proactively initiated on-chain negotiations,” offering to return the stolen assets for a white hat bounty. The DeFi platform accepted the proposal, and most of the funds were safely returned. Now, TrustedVolumes affirmed that it is “open to constructive communication regarding a bug bounty and a mutually acceptable resolution.”

Decentralized exchange aggregator 1inch clarified that there was no impact on its systems, infrastructure, or user funds, explaining that “TrustedVolumes operate independently as a liquidity provider, used by multiple protocols across the industry, and are not exclusive to 1inch.”

DeFi Exploits See Historic Surge

This attack follows a wave of exploits that has shaken the DeFi sector over the past month. Last week, PeckShield revealed that the crypto space saw 40 major hacks in April, which drained approximately $647 million. This figure represents a 1,140% month-over-month (MoM) increase from March’s $52.2 million. It also represents a 292% surge from the $165 million the DeFi sector lost during the first quarter of 2026. Notably, the top two incidents of the month, Drift Protocol’s $285 million and KelpDAO’s $290 million exploits, accounted for 91% of the funds lost last month, and they now rank among the Top 10 hacks since 2021.
8 May 2026, 03:40
Arbitrum Council Approves Unfreezing $71M in ETH From Kelp DAO Exploit

The Arbitrum Security Council has approved a joint proposal to unfreeze approximately $71 million in ETH that was locked following an exploit on the Kelp DAO protocol. The decision is expected to accelerate the recovery of rsETH collateral and restore liquidity for affected users.

Background of the Exploit and Freeze

In early 2025, an exploit targeting Kelp DAO, a liquid restaking protocol, led to the freezing of a significant amount of ETH by the Arbitrum Security Council. The council, which acts as a safety mechanism for the Arbitrum ecosystem, intervened to prevent further loss and to allow time for investigation and remediation. The frozen funds, valued at around $71 million at current market rates, were held in a smart contract while stakeholders worked on a recovery plan.

Joint Proposal and Approval Process

The successful proposal was jointly submitted by three key entities: Aave Labs, the development team behind the Aave lending protocol; Kelp DAO, the affected protocol; and LayerZero, the cross-chain interoperability platform. The collaboration was necessary because the frozen funds were intertwined across multiple protocols and layers, requiring coordinated action to safely unfreeze and redistribute them. The proposal underwent a standard governance process, including a voting period and technical review, before receiving final approval from the Arbitrum Security Council. The council’s decision was based on the thoroughness of the recovery plan and the assurance that the exploit vector had been addressed.

Impact on rsETH Collateral and Users

The unfreezing of these funds is a critical step in restoring the rsETH collateral pool. rsETH is a liquid restaking token that represents staked ETH on the EigenLayer ecosystem. The exploit had temporarily destabilized the collateral backing, causing uncertainty for users who had deposited ETH in exchange for rsETH.
With the funds now being released, Kelp DAO can begin the process of rebalancing its reserves and resuming normal operations. This move is expected to restore confidence among liquidity providers and borrowers who rely on the stability of rsETH.

Broader Implications for DeFi Security

This incident highlights the importance of security councils and rapid response mechanisms in decentralized finance. The Arbitrum Security Council’s ability to freeze and later unfreeze funds, with proper governance, demonstrates a balanced approach between security and decentralization. However, it also raises questions about the centralization of power in such councils, even if temporary. The joint proposal model, involving affected protocols and infrastructure providers, could become a template for handling future cross-protocol incidents.

Conclusion

The approval to unfreeze $71 million in ETH marks a positive resolution to a significant DeFi exploit. The coordinated effort between Aave Labs, Kelp DAO, and LayerZero, combined with the decisive action of the Arbitrum Security Council, has set a precedent for how the ecosystem can manage and recover from security incidents. Users and stakeholders will now watch closely as the funds are redistributed and normal operations resume.

FAQs

Q1: What was the Kelp DAO exploit?
The Kelp DAO exploit was a security breach that targeted the protocol’s smart contracts, leading to the freezing of approximately $71 million in ETH by the Arbitrum Security Council to prevent further losses.

Q2: Who submitted the proposal to unfreeze the funds?
The proposal was jointly submitted by Aave Labs, Kelp DAO, and LayerZero, representing a collaborative effort between the affected protocol, a major lending platform, and a cross-chain infrastructure provider.

Q3: What is rsETH and why is this important?
rsETH is a liquid restaking token on the EigenLayer ecosystem.
Unfreezing the funds is crucial for restoring the collateral backing of rsETH, ensuring stability for users who have deposited ETH in exchange for the token.
8 May 2026, 01:24
ChatGPT adds emergency contact feature as 33 deaths pile up

OpenAI rolled out Trusted Contact on Wednesday. The feature lets adult ChatGPT users pick someone to get an alert if the company’s systems flag a conversation about serious self-harm. It’s an expansion of the parental controls OpenAI launched in September 2025, which let parents monitor their teens’ accounts. Now anyone 18 or older can opt in, per OpenAI’s announcement.

How OpenAI’s alerts actually work

The user starts by adding one adult as their Trusted Contact in ChatGPT settings. The prospective contact gets an invitation explaining the setup and has a week to accept. If they decline, the user can pick someone else. When automated monitoring spots a potential self-harm conversation, ChatGPT tells the user it might notify their contact. It also suggests ways for the user to reach out themselves. Then a team of human reviewers looks at the conversation. If they confirm it’s serious, they send a short alert to the user’s contact by email, text, or in-app ping. The alert doesn’t include what the user said, just the general reason and a link to guidance on how to talk through tough stuff. OpenAI says human review wraps up within an hour. The user can swap or remove their selected contact whenever. The contact can bail out on their end too.

Doctors helped build OpenAI’s Trusted Contact feature

OpenAI says it worked with its Global Physicians Network (260-plus licensed doctors in 60 countries) and its Expert Council on Well-Being and AI. The American Psychological Association weighed in as well. “Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress,” Dr. Arthur Evans, CEO of the American Psychological Association, said in the announcement. “Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most.” Dr.
Munmun De Choudhury, a Georgia Tech professor and council member, called it “a step forward to human empowerment, especially during moments of vulnerability.”

OpenAI faces pressure from AI suicide lawsuits

The timing isn’t random. OpenAI is staring down a stack of lawsuits from families whose relatives died by suicide after long ChatGPT sessions. In several cases, families claim the chatbot told users to pull away from loved ones or doubled down on harmful thought loops. LLMDeathCount, a site tracking AI chatbot-related deaths, lists 33 cases from March 2023 to May 2026. Victims ranged from 13 to 83 years old, per Cryptopolitan’s earlier coverage. ChatGPT accounts for 24 of those; Google’s Gemini, Meta and other platforms make up the rest.

OpenAI’s new feature is opt-in, and users can run multiple ChatGPT accounts. Anyone who doesn’t turn on Trusted Contact, or who just logs into a different account, sidesteps the whole thing. The same issue applies to the parental controls. Trusted Contact also doesn’t replace crisis hotlines: ChatGPT still surfaces local crisis numbers and pushes users toward emergency services when conversations hit acute distress levels, according to OpenAI.

OpenAI’s Trusted Contact feature links AI users with real-world support. The company said it’ll keep working with clinicians, researchers, and policymakers on how AI should respond when users might be in crisis.
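The review-then-alert pipeline OpenAI describes, automated flagging, a warning to the user, human confirmation within an hour, and a content-free alert, can be sketched roughly as follows. Every name and field here is a hypothetical illustration, not OpenAI's actual API or internals:

```python
# Sketch of the flag -> user warning -> human review -> contact alert flow.
def handle_flagged_conversation(trusted_contact, reviewer_confirms):
    """Return (message_to_user, alert_to_contact) for a flagged conversation."""
    # 1. The user is warned first and nudged to reach out themselves.
    user_message = ("Your trusted contact may be notified. "
                    "Consider reaching out to them yourself.")

    # 2. Human reviewers confirm or dismiss the automated flag.
    if not reviewer_confirms:
        return user_message, None

    # 3. No alert if the user never opted in or has removed their contact.
    if trusted_contact is None:
        return user_message, None

    # 4. The alert carries only a general reason, never conversation content.
    alert = {
        "to": trusted_contact,
        "reason": "general well-being concern",  # no transcript included
        "channels": ["email", "text", "in-app"],
    }
    return user_message, alert

_, alert = handle_flagged_conversation("alex@example.com", reviewer_confirms=True)
print(alert["to"])  # alex@example.com
```

The ordering matters: because human review sits between the automated flag and the outgoing alert, a false positive from the monitoring system never reaches the contact, which is the safeguard the announcement emphasizes.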
7 May 2026, 21:55
AI free-trial abuse is becoming a costly problem for startups, Stripe says

AI startups are increasingly struggling with a type of fraud that barely existed a few years ago: automated users signing up in bulk to drain expensive computing resources before companies can stop them. Stripe Chief Executive Patrick Collison said the problem has become widespread among AI firms using the company’s payment infrastructure. Speaking on the TBPN podcast, Collison said roughly one in six new accounts created on some AI platforms now appears to be fraudulent.

The abuse centers on inference tokens, the computing credits required to run AI models. Fraudsters create fake accounts, consume the free allocations offered to new users, then disappear without paying. In some cases, access is reportedly resold through online channels that distribute low-cost AI credentials. Fortune reported details from Stripe executives on May 7.

Stripe’s Collison warns AI companies are facing a new type of fraud

The issue is hitting startups particularly hard because AI products carry real usage costs from the moment someone begins interacting with a model. Unlike traditional software companies, AI firms cannot onboard millions of free users without paying for the underlying compute power needed to process prompts and generate responses. Emily Sands, Stripe’s Head of Data and AI, said some attackers are operating at speeds that make manual fraud reviews ineffective. “One of the things that’s really scary about that is that these attackers can burn inference costs, can rack up massive usage bills that they never intend to pay, and they can do that very, very quickly because they are consuming tokens at machine speed,” Sands told Fortune. According to Sands, abuse involving AI free trials has more than doubled over the past six months. Researchers tracking AI security vulnerabilities say the attacks often exploit weak credential controls rather than sophisticated hacking techniques.
Many AI systems still rely on broad API permissions that allow automated agents to access large portions of backend infrastructure once credentials are obtained. A March 2026 report from security research firm Grantex found that most leading open-source AI agent projects lacked granular identity separation between agents, making it difficult to isolate compromised accounts without rotating entire system credentials. The broader market for stolen credentials is also expanding: cybersecurity company SpyCloud said it recovered 18.1 million exposed API keys and machine credentials from criminal marketplaces in 2025, including millions tied to AI-related services.

Startups are beginning to change how they handle user acquisition

Some startups are already changing how they handle user acquisition because of the rising costs. Industry executives say companies that once relied heavily on free trials are now shortening trial periods, imposing stricter rate limits, or requiring payment details earlier in the signup process. Stripe said it has expanded its Radar fraud-detection system to evaluate AI account registrations using indicators such as device fingerprints, IP reputation, and email-domain history. The company said the system blocked more than 3.3 million potentially risky signups across eight AI companies during the past month.

The company is also exploring payment systems designed to reduce unpaid usage altogether. Stripe has backed a blockchain-based project called Tempo that would allow AI services to charge customers continuously as compute resources are consumed. Crypto exchange Coinbase is developing a similar system known as x402, focused on real-time payments between applications and APIs. Supporters of the approach believe instant settlement could reduce fraud exposure by removing the delay between resource consumption and payment collection.
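A signup screen of the kind described here combines individually weak signals, device-fingerprint reuse, IP reputation, and email-domain history, into a single risk score. The following is a simplified, hypothetical sketch of that idea, not Radar's actual model; the weights and threshold are invented for illustration:

```python
def signup_risk_score(signup, seen_fingerprints, bad_ips, disposable_domains):
    """Combine weak fraud signals into a 0-1 risk score (illustrative weights)."""
    score = 0.0
    if signup["device_fingerprint"] in seen_fingerprints:
        score += 0.5   # same device already created other accounts
    if signup["ip"] in bad_ips:
        score += 0.25  # IP previously linked to abuse (proxy/VPN/datacenter)
    domain = signup["email"].rsplit("@", 1)[-1]
    if domain in disposable_domains:
        score += 0.25  # throwaway email provider
    return min(score, 1.0)

risky = signup_risk_score(
    {"device_fingerprint": "fp-123", "ip": "203.0.113.9",
     "email": "a@tempmail.example"},
    seen_fingerprints={"fp-123"},
    bad_ips={"203.0.113.9"},
    disposable_domains={"tempmail.example"},
)
print(risky)  # 1.0 -- all three signals fired; block or step-up verification
```

In practice a platform would act on the score rather than any single signal, for example requiring payment details up front above a threshold like 0.5 and blocking outright near 1.0, which matches the trial-shortening and earlier-payment-capture tactics the executives describe.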
Even so, security analysts say the problem reflects a broader tension inside the AI industry: startups are racing to grow as quickly as possible while many of the underlying security and identity systems remain immature.










































