News
22 Apr 2026, 06:38
ICP price prediction 2026-2032: Is ICP a good investment?

Key takeaways:
- ICP is expected to reach a maximum price of $4.57 in 2026.
- The Internet Computer price forecast for 2029 expects the token to peak at $12.20.
- By 2032, the price of Internet Computer might reach a maximum of $19.82.

Internet Computer (ICP) is a groundbreaking blockchain network developed by the DFINITY Foundation. It aims to extend the functionality of the internet, enabling it to host backend software and transforming it into a global, decentralized computer. The Internet Computer blockchain incorporates advanced cryptography and innovative technology to provide scalable, efficient, and secure decentralized applications (dApps).

Given its robust technology and expanding utility, the Internet Computer blockchain's future price prospects look promising. As more developers build on the platform and adoption increases, demand for the ICP token is likely to rise. Does the Internet Computer coin have a future? How much will it cost in 2026? Will ICP reach $1000? Let's get into the current price analysis and predictions.

Overview
Cryptocurrency: Internet Computer
Token: ICP
Price: $2.54
Market cap: $1.383B
Trading volume: $36.409M
Circulating supply: 551.89M ICP
All-time high: $750.73 (May 10, 2021)
All-time low: $2.02 (Feb 24, 2026)
24-h high: $2.51
24-h low: $2.41

Internet Computer Network technical analysis
Volatility (30-day): 4.83% (Medium)
14-day RSI: 50.32 (Neutral)
50-day SMA: $2.45
200-day SMA: $3.28
Sentiment: Bearish
Fear & Greed Index: 33 (Fear)
Green days: 13/30 (43%)

Internet Computer price analysis TL;DR: ICP bounced roughly 15% and reclaimed key support at $2.45. The 4-hour momentum is bullish and still building. A break above $2.66 would signal continuation to the upside, whereas failure to do so would likely result in a pullback toward $2.45.
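The 14-day RSI of 50.32 cited above measures average gains against average losses over the lookback window. A minimal sketch of the simple-average variant on synthetic prices (Wilder's original formula uses a smoothed recursion, and the data provider's exact method is not stated here):

```python
def rsi(closes, period=14):
    """Relative Strength Index, simple-average variant.

    RSI = 100 - 100 / (1 + RS), where RS is the ratio of the average
    gain to the average loss over the last `period` price changes.
    """
    deltas = [b - a for a, b in zip(closes, closes[1:])]
    recent = deltas[-period:]
    avg_gain = sum(d for d in recent if d > 0) / period
    avg_loss = sum(-d for d in recent if d < 0) / period
    if avg_loss == 0:  # no losing candles in the window
        return 100.0
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

# Perfectly alternating gains and losses balance to a neutral RSI of 50.
closes = [2.50, 2.60] * 8  # illustrative values, not real ICP prices
print(round(rsi(closes), 2))
```

Readings near 50, as in the table above, indicate neither overbought nor oversold conditions.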
ICP 1-day price analysis

As of April 22, ICP is trading around $2.52, rebounding roughly 12–15% from the recent low near $2.25 and signaling a solid recovery phase after the prior downtrend. Price has reclaimed the mid Bollinger Band at $2.45, which now acts as support, confirming an improving structure.

ICPUSDT 1-day price chart by TradingView

However, price is now approaching the upper band near $2.66, where rejection has already started to appear. Recent candles show wicks and hesitation, suggesting supply is entering. MACD remains bullish with a positive histogram, but momentum is beginning to flatten slightly, indicating the move is losing some strength. If ICP breaks above $2.66, continuation toward $2.75–$2.80 becomes likely. Failure here would likely result in a pullback toward $2.45, with $2.25 as the deeper support if selling accelerates.

ICP 4-hour price analysis

Over the 4-hour period, ICP has pushed from roughly $2.40 to $2.52, a move of about 5%, and is now trading above the Alligator lines, which are starting to align bullishly. This suggests short-term trend control has shifted to buyers.

ICPUSDT 4-hour price chart by TradingView

The recent impulsive move followed by a small consolidation shows healthy structure, not yet exhaustion. MACD has crossed bullish again and is expanding, supporting continuation. Unlike the daily, momentum here is still building rather than fading. The key level to watch is $2.48–$2.50: holding it keeps the bullish structure intact and opens the door to another push toward $2.60–$2.66, while losing it would weaken momentum and likely send the price back toward $2.40.
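The mid and upper Bollinger Bands referenced in the analysis are derived from a simple moving average and its standard deviation. A minimal sketch using synthetic closes and the textbook 20-period, 2-sigma settings (an assumption; the chart's actual feed and parameters are not given here):

```python
from statistics import mean, stdev

def bollinger_bands(closes, period=20, width=2.0):
    """Return (lower, mid, upper) bands for the most recent close.

    mid   = simple moving average of the last `period` closes
    upper = mid + width * sample standard deviation
    lower = mid - width * sample standard deviation
    """
    window = closes[-period:]
    mid = mean(window)
    sd = stdev(window)
    return mid - width * sd, mid, mid + width * sd

# Synthetic closes drifting upward from 2.25 (illustrative only).
closes = [2.25 + 0.015 * i for i in range(20)]
lower, mid, upper = bollinger_bands(closes)
print(f"lower={lower:.2f} mid={mid:.2f} upper={upper:.2f}")
```

A close pressing against the upper band, as described above, is conventionally read as a stretched move that either breaks out or mean-reverts toward the mid band.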
ICP technical indicators: Levels and action

Daily simple moving average (SMA)
SMA 3: $2.45 (BUY)
SMA 5: $2.53 (BUY)
SMA 10: $2.50 (BUY)
SMA 21: $2.42 (BUY)
SMA 50: $2.45 (BUY)
SMA 100: $2.66 (SELL)
SMA 200: $3.28 (SELL)

Daily exponential moving average (EMA)
EMA 3: $2.47 (BUY)
EMA 5: $2.48 (BUY)
EMA 10: $2.48 (BUY)
EMA 21: $2.46 (BUY)
EMA 50: $2.48 (BUY)
EMA 100: $2.70 (SELL)
EMA 200: $3.29 (SELL)

What to expect from ICP price analysis

ICP is currently in a recovery trend with strengthening short-term bullish momentum, but it is approaching a strong resistance zone around $2.66. Unless this level is decisively broken, a pullback is likely.

Is Internet Computer a good investment?

The Internet Computer (ICP) has shown significant potential and volatility since its launch, which is common for relatively new and ambitious blockchain projects. Its technology aims to decentralize the internet and bring smart contract functionality to the web, which could have wide-ranging implications for the future of the web. However, ICP's market performance has been highly volatile, and its success depends heavily on adoption of its technology and the broader market environment for cryptocurrencies. Please note that before making an investment decision, you should seek independent professional advice.

Will Internet Computer reach $50?

Reaching $50 would require growth well beyond this forecast, which tops out at $19.82 in 2032. While current Internet Computer sentiment is sideways and future price movements and market cap are expected to trend positive, $50 remains a longer-term possibility rather than anything this prediction window supports.

Will ICP reach $1000?

Although its all-time high sits at $750.73, reaching $1000 in the foreseeable future is highly unlikely. ICP is down more than 99% from its ATH and would require a massive turnaround in market fortunes to recapture previous highs. Current price levels may, however, present a buying opportunity for risk-tolerant investors.

Where can I buy Internet Computer?

You can buy Internet Computer on exchanges including Binance, Bybit, Coinbase Exchange, OKX, and KuCoin.
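The BUY/SELL actions in the moving-average tables above follow a common convention: a moving average sitting below the current price is read as support (BUY), one sitting above as resistance (SELL). A minimal sketch assuming that convention and textbook SMA/EMA formulas, with illustrative closes rather than the site's actual data:

```python
def sma(closes, period):
    """Simple moving average of the last `period` closes."""
    return sum(closes[-period:]) / period

def ema(closes, period):
    """Exponential moving average seeded at the first close."""
    k = 2 / (period + 1)  # standard smoothing factor
    value = closes[0]
    for price in closes[1:]:
        value = price * k + value * (1 - k)
    return value

def action(price, ma_value):
    """BUY if the average acts as support below price, else SELL."""
    return "BUY" if price > ma_value else "SELL"

closes = [2.42, 2.40, 2.45, 2.48, 2.50, 2.53, 2.52]  # illustrative
price = closes[-1]
for p in (3, 5):
    print(f"SMA {p}: {sma(closes, p):.2f} {action(price, sma(closes, p))}")
    print(f"EMA {p}: {ema(closes, p):.2f} {action(price, ema(closes, p))}")
```

This also explains the split in the tables: short averages trail the recent rebound below price (BUY), while the 100- and 200-period averages still sit above it (SELL).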
Does Internet Computer have a good long-term future?

Yes, the Internet Computer coin shows a promising long-term outlook. Price predictions indicate steady growth, with a potential year-on-year increase, reflecting a positive trend and strong market potential.

Recent news/opinion on ICP

"Borrowing against Bitcoin – without bridging or wrapping – is now possible on ICP. @LiquidiumFi just made it happen. In this walkthrough, Robin (CEO) shows exactly how to borrow USDT on Ethereum using native Bitcoin as collateral, step by step. This is what native…" (DFINITY Foundation, @dfinity, April 8, 2026)

"ICP is the most used blockchain in Web3 🌐 These networks have processed billions of transactions on mainnet, driving real on-chain usage around the world. Here are the 25 busiest blockchains by total lifetime transactions 📊" (Chainspect, @chainspect_app, March 10, 2026)

Internet Computer price prediction April 2026

In April 2026, ICP (Internet Computer) is expected to see a price range with a minimum of $2.25, an average of $2.55, and a maximum of $2.86.

Internet Computer price prediction 2026

For 2026, ICP's price is projected to range between a minimum of $2.05 and a maximum of $4.57, with an average estimate of $3.81.
ICP price prediction 2026: minimum $2.05, average $3.81, maximum $4.57

Internet Computer price predictions 2027–2032

2027: minimum $4.59, average $6.35, maximum $7.11
2028: minimum $7.13, average $8.89, maximum $9.66
2029: minimum $9.67, average $11.43, maximum $12.20
2030: minimum $10.21, average $12.98, maximum $14.74
2031: minimum $13.75, average $15.52, maximum $17.28
2032: minimum $15.30, average $17.06, maximum $19.82

Internet Computer price forecast 2027

Projections suggest that in 2027, the Internet Computer (ICP) coin could peak at $7.11, with a minimum forecast of $4.59 and an average price of around $6.35.

Internet Computer token price prediction 2028

In 2028, ICP could potentially reach a high of $9.66, with a projected low of around $7.13 and an average trading price of approximately $8.89.

Internet Computer ICP price prediction 2029

The 2029 forecast indicates that ICP could reach up to $12.20, with an average price of $11.43 and a minimum expected around $9.67.

Internet Computer ICP price prediction 2030

In 2030, ICP is expected to fluctuate between $10.21 and $14.74, with an average projected price of $12.98.

Internet Computer ICP price prediction 2031

Predictions suggest that the price of ICP could reach a peak of $17.28 by 2031, with a projected minimum of around $13.75 and an average of approximately $15.52.

Internet Computer price prediction 2032

In 2032, analysts suggest a maximum price of $19.82 for ICP. Traders and investors can anticipate an average price of $17.06 and a minimum price of $15.30.

Internet Computer ICP price prediction 2026 – 2032

Internet Computer market price prediction: Analysts' ICP price forecast

Changelly: 2026 $4.90, 2027 $2.57
Digitalcoinprice: 2026 $8.86, 2027 $6.90
Coincodex: 2026 $2.62, 2027 $2.49

Cryptopolitan's Internet Computer (ICP) price prediction

Cryptopolitan's Internet Computer prediction showcases a gradual upward trajectory. In 2026, ICP is forecast to range between $2 and $6, averaging around $3.5.
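Multi-year targets like these imply a specific compound growth rate, which is a useful sanity check. A quick sketch assuming the current price of $2.54 and the 2032 maximum of $19.82 from the tables above, over roughly six years:

```python
def implied_cagr(start, end, years):
    """Compound annual growth rate implied by moving from start to end."""
    return (end / start) ** (1 / years) - 1

# Assumed inputs: today's ICP price and the 2032 maximum forecast.
rate = implied_cagr(2.54, 19.82, 6)
print(f"Implied CAGR: {rate:.1%}")
```

The result is on the order of 40% per year, a reminder of how aggressive the upper-end forecast is relative to typical asset returns.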
Subsequent years show increasing potential, with projections for 2027 aiming at a maximum of $7.81 and an average of $5.20. By 2032, Cryptopolitan anticipates ICP could peak at $20, with an average price of around $14.

Internet Computer historic price sentiment

ICP price history by Coingecko

ICP began trading in June 2021 at $49.75. It peaked at $128.43 between June and August before dropping to $37.61. It fluctuated between $39.53 and $45.15 from September to November, ending November at $38.18. From December to February 2022, it ranged from $18.14 to $24.64.

From March to August 2022, ICP declined significantly, from $14.55 to $5.66. Between September and November it continued to drop, ending November at $3.52.

From March to November 2023, ICP prices fluctuated between $2.88 and $6.49, ending November at $3.77. From December 2023 to February 2024, ICP rose to $12.58 before closing February at $10.56. Between March and May, it ranged from $10.70 to $13.98, ending May at $11.21. June to August saw fluctuations between $5.88 and $13.00, while September traded around $9.55–$9.98. ICP traded near $8.66 in October, averaged $12.20 in November, and started December strong at $12.44 before dropping 20% to close the year at $9.88.

In January 2025, Internet Computer peaked at $12.50 but soon fell, hitting a low of $5.90 in February. In April, ICP maintained an average of $5.03, and in June it traded between $4.34 and $6.31. July saw a high of $6.25 and a low of $4.67. In August, ICP maintained a trading range of $4.61 to $6.08, and in September the coin traded at an average price of $4.65. In November, ICP traded between $3.58 and $9.73, and in December 2025 it traded between $2.67 and $3.75.

In January 2026, the coin traded between $2.59 and $4.78, and in February it traded between $2.02 and $2.69. In March, ICP traded between $2.17 and $2.84, and in April it is trading at an average price of $2.40.
22 Apr 2026, 02:14
US admiral calls Bitcoin an instrument for US ‘power projection’

US Navy Admiral Samuel Paparo said Bitcoin’s proof-of-work technology has "really important" computer science applications when it comes to cybersecurity.
22 Apr 2026, 02:11
Jeff Bezos’ secretive AI lab is close to securing another $10 billion in funding

Amazon founder Jeff Bezos is building again after leaving the e-commerce giant in 2021, and his latest venture, a physical-AI lab, is already close to a $38 billion valuation.

Bezos' new company, codenamed Project Prometheus, is close to finalizing a $10 billion fundraising deal with JPMorgan, BlackRock, and other investors, The Financial Times reported Tuesday, citing people with knowledge of the matter. Prometheus launched in November 2025, raising $6.2 billion in seed capital from investors, including Bezos himself. Reports note that demand from institutional investors was so high that the company extended the round to include an additional $10 billion. The new funding round, which is expected to close soon, would put the company at a $38 billion valuation.

Why so much interest in Bezos' AI lab?

Project Prometheus is a physical-AI laboratory that puts Bezos back in an operational role for the first time since he left Amazon in 2021. The company aims to build novel AI systems that understand the physical laws of the universe and can interact with the physical environment, especially in manufacturing and industrial processes. Such systems are quite different from the AI models built by OpenAI, Anthropic, Google, and other popular LLM companies.

The major challenge for companies building physical-AI systems is usually the data moat. Large language models are trained on text, code, images, and data scraped from the internet, which are readily available in abundance. Physical AI requires real-world interaction data, like sensor readings, manufacturing processes, tactile feedback, trajectories, and failures in messy environments, which are usually proprietary and expensive to collect.

Elon Musk's Tesla is a textbook case of the data situation in the physical-AI space. Tesla reportedly has 5–6 million electric cars fitted with Full Self-Driving (Supervised) hardware and software, driving more than 50 billion miles every year.
The real-world driving data collected gives the company an edge over competitors in improving the self-driving experience. That is the kind of leverage Prometheus intends to pursue with the freshly raised capital, with the goal of becoming "one of the most important companies in the world," says Arch's Nelsen, a Prometheus director. But the AI company wants to do this through a holding company.

Prometheus will shop companies for data

Bezos and Prometheus co-chief executive Vikram Bajaj, a former Google executive, are also leading separate talks to raise tens of billions of dollars for the holding company, according to people familiar with the matter. Most of that money would go toward acquiring companies, especially in engineering, architecture, and design, sectors the two executives believe Prometheus will disrupt. The holding company would serve more like a "manufacturing transformation vehicle," the people said. With investments in such businesses, the company can collect real-world data to train Prometheus AI systems.

The AI lab is still mostly in its early phase. It has recruited over 100 employees, including talent from big names like Meta, OpenAI, and DeepMind.

In other news, Amazon has also invested an additional $5 billion in Anthropic, with the option to commit up to $20 billion more over time. As part of the new deal, Anthropic is committed to spending over $100 billion on AWS technologies over the next decade, Cryptopolitan reported Tuesday.
22 Apr 2026, 00:10
Meta AI Training Sparks Alarm: Company to Record Employee Keystrokes for Model Development

In a move that has ignited immediate privacy concerns across the technology sector, Meta announced on April 21, 2026, that it will begin recording employee keystrokes and mouse movements to train its artificial intelligence models. This controversial decision represents a significant escalation in corporate data-collection practices and raises fundamental questions about the boundaries of workplace surveillance in the AI era.

Meta's AI Training Strategy and Employee Data Collection

Meta's new initiative involves deploying internal tools that capture how employees interact with specific applications during their workday. According to company statements provided to Reuters and Bitcoin World, this data collection focuses on routine computer interactions, including mouse movements, button clicks, and navigation through dropdown menus. The company argues these real-world examples are essential for building AI agents that can effectively assist people with everyday computer tasks.

A Meta spokesperson explained the rationale behind this approach: "If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them." The company emphasizes that safeguards exist to protect sensitive content and that collected data serves exclusively for AI training purposes. However, privacy advocates immediately questioned the adequacy of these protections.

The Expanding AI Data Supply Chain

Meta's announcement represents just one development in a broader industry trend in which technology companies increasingly mine internal corporate communications for AI training material. Last week, reports surfaced about startups being approached for access to their historical Slack archives, Jira tickets, and internal messaging-platform data.
These communications, once considered private corporate records, are now becoming valuable commodities in what industry observers call the "AI data supply chain." The accelerating demand for training data stems from the fundamental requirements of large language models and AI systems: these programs require massive datasets to learn patterns, understand context, and generate appropriate responses. As publicly available internet data becomes increasingly utilized and sometimes restricted, companies are turning inward to find new data sources.

Privacy Implications and Ethical Considerations

Privacy experts express significant concerns about this emerging practice. Dr. Elena Rodriguez, director of the Center for Digital Ethics at Stanford University, notes: "When yesterday's internal communications become today's training data, we're fundamentally redefining the boundaries of workplace privacy. Employees reasonably expect their work communications to remain within the company, not become fodder for machine learning algorithms."

The ethical implications extend beyond simple privacy concerns. Questions arise about informed consent, the effectiveness of data anonymization, and the potential for sensitive information to inadvertently become part of training datasets. Furthermore, there are concerns about how this data might influence AI behavior and whether it could perpetuate internal corporate biases.

Industry Context and Competitive Pressures

Meta's move occurs within a highly competitive AI-development landscape in which access to quality training data represents a significant competitive advantage. Other major technology companies, including Google, Microsoft, and Amazon, have also expanded their data-collection methodologies, though approaches vary significantly in transparency and scope.
The table below summarizes different approaches to AI training-data collection among major tech firms:

Meta: public web, licensed content, employee interactions; employee data used: keystrokes, mouse movements, application usage; transparency: medium (reactive disclosure)
Google: search data, YouTube, public datasets; employee data used: limited internal testing data; transparency: high (published research)
Microsoft: GitHub, professional networks, enterprise data; employee data used: anonymized productivity patterns; transparency: medium (selective disclosure)
OpenAI: licensed content, web archives, partnerships; employee data used: minimal direct employee data; transparency: variable (evolving policies)

Technical Implementation and Safeguards

According to Meta's technical documentation, the data-collection system operates with several layers of protection. The company claims to implement:

- Selective application monitoring: only specific, approved applications undergo monitoring
- Content-filtering algorithms: systems automatically redact sensitive information before storage
- Access controls: strict limitations on which personnel can access raw data
- Data encryption: end-to-end encryption during transmission and storage
- Retention limits: automatic deletion of data after training completion

However, cybersecurity experts question whether these safeguards can completely prevent data leakage or misuse. "The fundamental challenge," explains cybersecurity analyst Michael Chen, "is that to train AI on human-computer interaction patterns, you need to capture those patterns in their authentic form. Any filtering or anonymization potentially reduces the training data's value, creating tension between utility and privacy."

Legal and Regulatory Landscape

The legal framework surrounding employee data collection varies significantly by jurisdiction. In the European Union, the General Data Protection Regulation (GDPR) imposes strict requirements for employee consent and data minimization.
California's Consumer Privacy Act (CCPA) and newer state privacy laws also create compliance challenges for widespread employee monitoring. Employment-law specialists note that traditional workplace-monitoring laws were written before the advent of AI training requirements. "Existing regulations generally address surveillance for productivity monitoring or security purposes," says labor attorney Sarah Johnson. "Using employee behavior as training data for commercial AI systems represents a new category that existing laws don't adequately cover."

Employee Perspectives and Workplace Culture

Initial reactions from Meta employees, gathered through anonymous professional networks, reveal mixed responses. Some technical staff express understanding of the technical necessity, while others voice discomfort with the monitoring's scope. "There's a difference between knowing your work is being evaluated and knowing your every keystroke might train a commercial AI system," commented one software engineer anonymously.

Workplace-culture experts warn that such monitoring could erode employee trust and innovation. "When employees feel constantly monitored, they may become more risk-averse and less creative," observes organizational psychologist Dr. Robert Kim. "The knowledge that exploratory work or early drafts could become permanent training data might inhibit the very innovation these AI systems are meant to enhance."

The Future of AI Development and Data Ethics

Meta's approach highlights broader questions about sustainable and ethical AI development. As public web data becomes increasingly utilized, and sometimes restricted through robots.txt files and other technical measures, AI companies face growing pressure to find alternative data sources. This pressure creates incentives to look inward to corporate data, raising fundamental questions about consent and data ownership.
Industry analysts predict several potential developments:

- Increased transparency requirements: regulators may mandate clearer disclosures about data sources
- Employee data rights: new rights specifically addressing the use of employee data in AI training
- Synthetic-data alternatives: increased investment in generating artificial training data
- Industry standards: cross-company agreements on ethical data-sourcing practices

Conclusion

Meta's decision to record employee keystrokes for AI training represents a significant moment in the evolution of artificial intelligence development and workplace-privacy standards. While the company presents this as a technical necessity for building more capable AI assistants, the move raises profound questions about the boundary between corporate innovation and individual privacy rights. As AI systems become increasingly integrated into workplace environments, the tension between data needs and ethical considerations will likely intensify, requiring new frameworks for balancing technological advancement with fundamental workplace protections. The Meta AI training initiative serves as a case study in these emerging challenges, highlighting the complex interplay between innovation, privacy, and ethics in the rapidly evolving AI landscape.

FAQs

Q1: What specific data is Meta collecting from employees?
Meta is collecting keystroke patterns, mouse movements, button clicks, and navigation behaviors within specific applications. The company states this data helps train AI models to better understand how people interact with computers for everyday tasks.

Q2: How is Meta protecting sensitive employee information during this data collection?
According to Meta, safeguards include content-filtering algorithms that redact sensitive information, encryption during transmission and storage, strict access controls, and data deletion after training completion.
However, privacy experts question whether these measures can completely prevent potential data exposure.

Q3: Is this type of employee data collection legal?
Legality varies by jurisdiction. In regions with strong privacy laws, such as the EU, such collection would require explicit consent and demonstrated necessity. In the United States, regulations are more fragmented, though states like California have implemented stronger privacy protections that may apply.

Q4: How does Meta's approach compare to other tech companies' AI training methods?
While most major tech companies use various data sources for AI training, Meta's systematic collection of employee-interaction data represents a more direct approach. Other companies typically rely more on public web data, licensed content, or anonymized usage patterns rather than direct employee monitoring.

Q5: What are the potential long-term implications of using employee data for AI training?
Long-term implications could include redefined workplace-privacy norms, potential impacts on employee trust and innovation, new regulatory frameworks specifically addressing AI training data, and possible shifts toward synthetic-data alternatives that reduce privacy concerns while maintaining AI development progress.

This post Meta AI Training Sparks Alarm: Company to Record Employee Keystrokes for Model Development first appeared on BitcoinWorld.
22 Apr 2026, 00:00
Anthropic Mythos Breach: Unauthorized Access to Exclusive AI Cybersecurity Tool Sparks Critical Enterprise Security Concerns

San Francisco, CA, April 30, 2025 – Anthropic's exclusive cybersecurity tool Mythos has reportedly been accessed by an unauthorized group through a third-party vendor environment, according to a Bloomberg investigation. The development raises significant concerns about the security of advanced AI systems designed for enterprise protection. The breach occurred despite Anthropic's carefully controlled release strategy for Mythos, a tool the company specifically designed to bolster corporate security defenses.

Anthropic Mythos Breach Investigation Underway

Anthropic confirmed it is investigating reports of unauthorized access to the Claude Mythos Preview. The company released this statement to Bitcoin World: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." Importantly, Anthropic's internal investigation has found no evidence that the unauthorized activity impacted the company's core systems; the breach appears limited to the preview environment accessed through vendor channels.

The unauthorized group reportedly gained access on the same day Anthropic publicly announced Mythos, employing multiple strategies to penetrate the system. According to Bloomberg's sources, the group made educated guesses about the model's online location, based on knowledge of Anthropic's formatting patterns for other models. The group's activities highlight potential vulnerabilities in third-party security protocols.

Third-Party Vendor Security Vulnerabilities Exposed

The breach pathway involved a third-party contractor working with Anthropic. Bloomberg reported that the unauthorized group leveraged "access" enjoyed by an individual currently employed at this contractor.
This incident underscores the persistent security challenges posed by extended enterprise ecosystems. Third-party vendors often represent the weakest link in corporate security chains. Organizations increasingly rely on specialized contractors for various functions, but this reliance creates additional attack surfaces, and the Anthropic Mythos situation demonstrates how sophisticated actors can exploit these relationships. Security experts consistently warn about third-party risks, noting that vendor security assessments often fail to keep pace with evolving threats.

Key Timeline: Anthropic Mythos Security Incident

April 2025: Anthropic announces the Mythos cybersecurity tool
Same day: Unauthorized group reportedly gains access
April 30: Bloomberg publishes its investigation findings
Ongoing: Anthropic conducts an internal security review

Enterprise AI Security Implications

The Mythos breach carries significant implications for enterprise AI security. Anthropic designed Mythos specifically to enhance corporate cybersecurity defenses and acknowledged the tool's dual-use potential during its announcement. In the wrong hands, Mythos could theoretically be weaponized against the very systems it was built to protect.

This incident raises critical questions about secure AI deployment. Enterprise organizations must consider several factors:

- Access control protocols: how organizations manage permissions for powerful AI tools
- Vendor risk management: security assessments for third-party contractors
- Monitoring capabilities: detecting unauthorized usage of AI systems
- Incident response: procedures for potential AI security breaches

Unauthorized Group's Motivations and Activities

Bloomberg's report provides intriguing details about the unauthorized group. Members belong to a Discord channel focused on discovering information about unreleased AI models.
The group's source told Bloomberg they are "interested in playing around with new models, not wreaking havoc with them." This distinction matters for understanding potential risks. The group has reportedly used Mythos regularly since gaining access and provided Bloomberg with evidence, including screenshots and a live software demonstration. Their activities appear focused on exploration rather than malicious exploitation.

However, security professionals caution that even non-malicious unauthorized access creates risks: it establishes pathways that malicious actors could later exploit. Cybersecurity experts emphasize that intent can change rapidly; a group initially interested in exploration might later decide to leverage its access for other purposes, or its access methods could be discovered and replicated by truly malicious actors. The digital security landscape evolves constantly.

Project Glasswing and Controlled Release Strategy

Anthropic released Mythos through an initiative called Project Glasswing, which provided limited access to select vendors, including major technology companies such as Apple. The controlled release strategy aimed specifically to prevent usage by bad actors; Anthropic recognized the tool's potential for misuse from the beginning.

Project Glasswing represents a growing trend in responsible AI deployment, with companies increasingly implementing phased releases for powerful AI systems. This approach allows for:

- Real-world testing in controlled environments
- Identification of potential security vulnerabilities
- Gradual scaling based on performance and safety data
- Establishment of usage protocols and best practices

Despite these precautions, the reported breach demonstrates the challenges of completely securing advanced AI systems. Even limited releases to trusted partners create potential exposure points, and the incident will likely influence future AI release strategies across the industry.
Industry Response and Security Best Practices

The cybersecurity community is closely monitoring the Anthropic Mythos situation. Industry experts note that AI security breaches require specialized response protocols: traditional data-breach procedures may not adequately address AI-specific risks such as model extraction, prompt-injection attacks, and training-data poisoning.

Enterprise security teams should review several areas following this incident:

Vendor security assessments: Organizations must implement rigorous vetting for all third-party vendors with AI-system access. These assessments should go beyond standard security questionnaires and include specific evaluation of AI security competencies and protocols.

Access monitoring: Continuous monitoring of AI-system usage patterns becomes essential. Anomaly-detection systems should flag unusual access patterns or usage volumes, and they must account for the unique characteristics of AI-tool interactions.

Incident response planning: Security teams need AI-specific incident response plans that address scenarios such as model compromise, unauthorized access, and potential weaponization. Regular tabletop exercises help prepare organizations for real incidents.

Broader Implications for the AI Security Landscape

The reported Mythos breach occurs amid growing concerns about AI security. As AI systems become more powerful and integrated into critical infrastructure, their security becomes increasingly important. Several trends are emerging in the AI security landscape:

First, specialized AI security roles are becoming more common. Organizations now hire professionals focused specifically on securing AI systems; these roles require understanding both traditional cybersecurity and unique AI vulnerabilities.

Second, regulatory attention is increasing. Governments worldwide are developing frameworks for AI security and safety.
Incidents like the Mythos breach will likely influence these regulatory developments by demonstrating real-world risks that regulations must address.

Third, the security research community is expanding its focus on AI. More researchers are investigating AI-specific attack vectors and defense mechanisms, and this growing body of knowledge will help improve AI security over time.

Conclusion

The reported unauthorized access to Anthropic’s Mythos cybersecurity tool highlights critical challenges in enterprise AI security. While Anthropic’s investigation found no impact on its core systems, the incident reveals vulnerabilities in third-party vendor security protocols.

The breach demonstrates how even carefully controlled AI releases can face security challenges. As AI systems become more integrated into enterprise operations, robust security measures become increasingly essential. The Anthropic Mythos situation serves as an important case study for organizations deploying advanced AI tools, underscoring the need for comprehensive security strategies that address both internal systems and extended vendor networks.

FAQs

Q1: What is Anthropic’s Mythos cybersecurity tool?
Mythos is an AI-powered cybersecurity tool developed by Anthropic for enterprise security applications. It is designed to enhance corporate security defenses but has dual-use capabilities that could be exploited by malicious actors.

Q2: How did the unauthorized group access Mythos?
The group reportedly gained access through a third-party vendor environment, using multiple strategies including educated guesses about the model’s online location based on Anthropic’s formatting patterns for other models.

Q3: Has Anthropic confirmed the breach?
Anthropic confirmed it is investigating reports of unauthorized access but stated its investigation has found no evidence that the activity impacted the company’s core systems.
The investigation focuses on the preview environment accessed through vendor channels.

Q4: What is Project Glasswing?
Project Glasswing is Anthropic’s initiative for controlled release of the Mythos tool. It provides limited access to select vendors, including major technology companies, with the goal of preventing misuse by bad actors.

Q5: What are the broader implications for AI security?
This incident highlights vulnerabilities in third-party vendor security and the challenges of securing advanced AI systems. It will likely influence AI release strategies, regulatory developments, and enterprise security practices across the industry.

This post Anthropic Mythos Breach: Unauthorized Access to Exclusive AI Cybersecurity Tool Sparks Critical Enterprise Security Concerns first appeared on BitcoinWorld.
21 Apr 2026, 23:15
Anthropic may still get a Defense Department deal after Trump said talks at the White House went well

Anthropic moved back into a Washington fight on Tuesday after President Donald Trump said a deal for the company’s AI models inside the Department of Defense could still happen. Speaking on CNBC’s “Squawk Box,” Trump said “it’s possible” there will be an agreement that allows Anthropic technology to be used by the military. He added: “They came to the White House a few days ago, and we had some very good talks with them, and I think they’re shaping up. They’re very smart, and I think they can be of great use.”

The remarks marked a change in tone after months of conflict between Anthropic, the Pentagon, and the Trump administration. In March, the DOD labeled Anthropic a supply chain risk, saying the company’s technology could threaten U.S. national security. The label forced defense contractors to certify that they were not using Anthropic’s Claude models in military work. Trump then told federal agencies to “IMMEDIATELY CEASE all use of Anthropic’s technology,” adding that his administration would “not do business with them again.”

Pentagon keeps using Claude while Anthropic fights the blacklist and reopens talks

That hard line did not fully hold. The Pentagon kept using Claude during the war with Iran. Anthropic later sued the Trump administration in San Francisco and Washington, D.C., to reverse the blacklist, and Trump’s Truth Social directive has been temporarily blocked by a federal judge.

Talks between the two sides then started opening again. Anthropic chief executive Dario Amodei met senior administration officials on Friday to discuss Mythos, the company’s new AI model with cybersecurity capabilities. White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent attended that meeting, and a White House spokesperson reportedly called the discussion “productive and constructive.” Earlier this month, Anthropic announced Mythos and limited its release to a small group of companies because of the model’s cyber capabilities.
The company said it has been holding “ongoing discussions” with U.S. government officials about Mythos. Mythos arrived after the lowest point of Anthropic’s dispute with the DOD, and its launch appears to have reopened the door to better ties with the administration. Amodei also joined an early April call with Bessent and Vice President JD Vance to discuss AI cyber readiness alongside other major tech CEOs. Anthropic signed a $200 million Pentagon contract in July, but negotiations over deploying Claude on the DOD’s GenAI.mil platform collapsed in September.

Banks rush toward Mythos as Anthropic prepares a wider European rollout

The company is facing pressure outside Washington too. On April 21, Reuters reported from New York and Paris that Anthropic plans to give European banks access to Mythos soon, citing three people familiar with the matter. Banks are scrambling to test the model after large U.S. banks received the first access.

Cybersecurity experts see Mythos as a challenge for banks and their legacy technology systems, and those fears drove warnings from regulators and policymakers at last week’s International Monetary Fund spring meetings in Washington. Scott Keipper, EY’s Americas Financial Services Technology Consulting Leader, said the speed of the technology is outrunning the governance, operating models, and control systems most banks were built to handle, widening the distance between finding risk and fixing it. Keipper also said banks need to move past one-time cybersecurity fixes and instead build AI into risk management across technology, operations, governance, and oversight.

One person familiar with the matter told Reuters that Anthropic wants to expand Mythos access to European and UK banks, along with other organizations, and that security checks are part of the rollout process.
Another person reportedly said European banks could get access within days, while the first person said the timeline could still be days or weeks. Bloomberg had already reported that Anthropic was preparing to release Mythos to UK financial institutions soon.











































