News
12 Mar 2026, 05:45
Crypto ATM Fraud Losses Skyrocket to $333M in US, Fueled by Alarming AI Deepfake Scams

BitcoinWorld Crypto ATM Fraud Losses Skyrocket to $333M in US, Fueled by Alarming AI Deepfake Scams Financial regulators and cybersecurity experts are sounding alarms after a new report revealed staggering losses from cryptocurrency ATM scams across the United States, with fraud totaling $333 million last year alone. This dramatic figure, reported by cybersecurity firm CertiK and covered by Cointelegraph, highlights a rapidly evolving threat landscape where criminal organizations increasingly exploit the very features that make crypto ATMs convenient: speed and relative anonymity. The analysis directly links the surge to sophisticated fraud groups now deploying artificial intelligence deepfake technology to bypass security measures and manipulate victims. Crypto ATM Fraud Losses Expose Critical Security Gaps The $333 million in reported losses marks a significant escalation in financial crimes targeting digital asset kiosks. These machines, often located in convenience stores, gas stations, and shopping malls, allow users to convert cash into cryptocurrencies like Bitcoin or Ethereum within minutes. Consequently, their fast transaction speeds present a major attraction for legitimate users and criminals alike. The CertiK report emphasizes that limited identity verification protocols at many kiosks create an easy avenue for theft. Unlike traditional bank transactions, which may involve multi-factor authentication and waiting periods, crypto ATM transactions can often be finalized in under five minutes with minimal oversight. Furthermore, the pseudo-anonymous nature of blockchain transactions complicates recovery efforts for stolen funds. Once cryptocurrency leaves a victim’s wallet and moves through the decentralized ledger, tracing and retrieving it becomes exceptionally difficult for law enforcement. This technical reality emboldens fraudsters who operate with a perceived lower risk of getting caught. The convergence of quick cash conversion and difficult asset recovery has effectively turned some crypto ATMs into high-risk points for financial crime. The Rising Threat of AI Deepfake Technology in Scams Cybersecurity analysts point to a dangerous new trend propelling these losses: the adoption of AI-generated deepfakes by criminal networks. This technology uses artificial intelligence to create highly convincing fake audio or video recordings. Scammers employ these deepfakes to impersonate trusted figures, such as family members, tech support agents, or government officials, during real-time calls with victims. For instance, a fraudster might use a deepfake voice clone of a grandchild in distress to urgently request money via a crypto ATM. The emotional manipulation, combined with the perceived authenticity of the voice, pressures victims into making rapid, irreversible transactions. Previously, such scams relied on text-based phishing or less convincing voice calls. However, the accessibility of AI tools has lowered the barrier for creating persuasive forgeries. A report from the Federal Trade Commission (FTC) in late 2024 noted a 150% year-over-year increase in complaints mentioning voice-cloning technology in fraud schemes. The integration of this technology into crypto ATM scams represents a natural and sinister evolution, exploiting both human psychology and technological infrastructure weaknesses. Expert Analysis on the Mechanics of the Scam Jane Kellerman, a former FBI financial crimes investigator and current cybersecurity consultant, explains the typical fraud workflow. “The scam often starts with a targeted phishing attempt or a data breach that gives criminals a victim’s basic information and phone number,” Kellerman states. “They then use AI software to synthesize a voice from short audio clips found online—perhaps from a social media video—of a relative. The victim receives a panicked call from what sounds like their loved one, claiming they need bail money or face an emergency. The scammer instructs them to withdraw cash and deposit it immediately into a specific crypto wallet via a nearby ATM, stressing that time is critical and that traditional wire transfers are too slow.” This sense of urgency is crucial. It short-circuits the victim’s normal critical thinking and due diligence. The physical act of using a cash-based ATM also feels more tangible and less suspicious to some than an online transfer, even though the destination is a digital wallet controlled by criminals. The table below outlines the common steps in a modern crypto ATM deepfake scam: Step Action by Fraudster Exploited Vulnerability 1. Reconnaissance Gathers victim data (phone, family names) from social media or data leaks. Personal data oversharing online. 2. Deepfake Creation Uses AI tools to clone a relative’s voice from online audio. Availability of personal media online and accessible AI tech. 3. Social Engineering Call Makes urgent call using deepfake voice, creating a fabricated crisis. Human emotional response and trust. 4. Transaction Direction Guides victim to a specific crypto ATM and provides a wallet QR code. Speed and anonymity of crypto ATM transactions. 5. Cash Conversion & Flight Receives crypto, then uses mixers or exchanges to launder funds. Irreversibility of blockchain transactions and cross-jurisdictional challenges. Regulatory and Industry Responses to Mounting Losses In response to the escalating fraud, regulatory bodies and the cryptocurrency industry are beginning to take action. The Financial Crimes Enforcement Network (FinCEN) has classified certain crypto kiosk operators as Money Services Businesses (MSBs), subjecting them to Anti-Money Laundering (AML) and Know Your Customer (KYC) regulations. However, enforcement and compliance levels can vary significantly between operators and states. Some jurisdictions are now considering legislation to mandate stricter identity checks for transactions above a certain threshold, potentially slowing the process but adding a critical security layer. Simultaneously, responsible crypto ATM operators are implementing voluntary safeguards. These measures include: Lower transaction limits for anonymous cash deposits. Enhanced on-screen warnings about common scams during the transaction flow. Extended transfer delays for first-time users or large amounts, allowing a brief cooling-off period. Integration with identity verification services that require a government ID scan for larger transactions. Despite these efforts, a patchwork of state regulations and the competitive pressure to offer user-friendly services create challenges for uniform security standards. The industry faces a difficult balance between maintaining the accessibility that defines crypto ATMs and implementing protections robust enough to deter sophisticated fraud rings. The Broader Impact on Crypto Adoption and Consumer Trust The $333 million loss figure represents more than just stolen money; it signifies a growing threat to consumer trust in cryptocurrency infrastructure. For mainstream adoption to continue, potential users must feel confident that on-ramps like ATMs are secure. High-profile fraud cases generate negative media coverage and can deter newcomers who are already cautious about the volatility and complexity of digital assets. This erosion of trust poses a long-term risk to the entire ecosystem, potentially stifling innovation and legitimate use cases. Moreover, these losses have real-world consequences for victims, who are often elderly or otherwise vulnerable individuals. The irreversible nature of cryptocurrency transactions means they rarely recover their funds. Victim advocacy groups report increased cases of severe financial and emotional distress linked to these scams, highlighting the human cost behind the statistical headline. Conclusion The revelation that US crypto ATM fraud losses hit $333 million last year serves as a critical wake-up call for regulators, industry operators, and consumers. This staggering sum underscores how advanced threats like AI deepfake technology are exploiting systemic vulnerabilities in fast, anonymous transaction systems. Addressing this crisis requires a coordinated multi-stakeholder approach: robust regulatory frameworks, proactive security measures by ATM operators, and widespread public education on recognizing social engineering tactics. As cryptocurrency continues to integrate into the financial mainstream, ensuring the security of its physical access points will be paramount to preventing further losses and safeguarding the future of digital asset adoption. FAQs Q1: What is a crypto ATM, and how does it work? A crypto ATM, or Bitcoin ATM, is a physical kiosk that allows individuals to buy (and sometimes sell) cryptocurrencies using cash or a debit card. Users scan a wallet QR code, insert money, and the machine sends the equivalent cryptocurrency to their digital wallet, often within minutes. Q2: How are AI deepfakes used in these scams? Scammers use AI software to create realistic fake audio or video of a trusted person, like a family member. They then call the victim, using this deepfake to pretend there is an emergency requiring immediate cash, which they instruct the victim to send via a crypto ATM. Q3: Why are crypto ATMs particularly vulnerable to this fraud? They enable very fast conversion of cash to irreversible cryptocurrency with relatively low identity checks compared to banks. This speed and anonymity benefit users but also provide a perfect tool for fraudsters pressuring victims to act quickly. Q4: What can I do to protect myself from a crypto ATM scam? Be extremely skeptical of any urgent request for money, especially via cryptocurrency. Verify the person’s identity by calling them back on a known number. Never deposit money into a crypto wallet at someone else’s direction during a stressful call. Remember that legitimate entities will not demand payment via cash-to-crypto machines for emergencies. Q5: Are there any regulations for crypto ATMs to prevent this? In the US, crypto ATM operators are generally required to register as Money Services Businesses and comply with AML laws. However, specific identity verification requirements vary by state and operator, leading to inconsistent security levels across the network. This post Crypto ATM Fraud Losses Skyrocket to $333M in US, Fueled by Alarming AI Deepfake Scams first appeared on BitcoinWorld .
12 Mar 2026, 04:10
Hackers Hijack Bonk.fun Domain, Deploy Wallet-Draining Phishing Prompt

Browser warnings flagged the site for suspected phishing after attackers pushed a fake TOS message designed to trick users.
12 Mar 2026, 00:00
AI Actor Tilly Norwood Sparks Outrage with Cringeworthy Debut Song, Igniting Hollywood Ethics Debate

BitcoinWorld AI Actor Tilly Norwood Sparks Outrage with Cringeworthy Debut Song, Igniting Hollywood Ethics Debate The debut of a musical track by AI-generated actor Tilly Norwood has ignited a fierce debate within the entertainment industry, highlighting growing tensions between technological innovation and artistic integrity. Particle6, the production company behind the synthetic persona, released the music video for “Take the Lead” this week, prompting immediate criticism from established actors and industry unions. This event marks a significant escalation in the use of AI for creating fully realized, media-producing characters, moving beyond static images or voice synthesis. Tilly Norwood’s AI-Generated Song Draws Swift Industry Condemnation Particle6 first introduced Tilly Norwood to the public in the fall of 2024. The reveal of a fully AI-generated actor designed for film and television roles was met with immediate concern. Golden Globe-winning actor Emily Blunt voiced a sentiment shared by many, telling Variety, “Good Lord, we’re screwed. Come on, agencies, don’t do that. Please stop.” The Screen Actors Guild‐American Federation of Television and Radio Artists (SAG-AFTRA) issued a formal statement, arguing that “‘Tilly Norwood’ is not an actor; it’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation.” The union emphasized that such technology “creates the problem of using stolen performances to put actors out of work.” Despite this backlash, Particle6 proceeded with its next phase: establishing Tilly Norwood as a cross-media personality. The release of “Take the Lead” represents a strategic move to build a fanbase and narrative for the character. The song’s lyrics directly address the controversy, with lines like “They say it’s not real, that it’s fake, but I am still human, make no mistake.” This meta-commentary has been cited by critics as a key example of the project’s conceptual dissonance. Deconstructing the AI Music Video and Its Production The music video for “Take the Lead” is a technically complex production. Particle6 reports that eighteen individuals contributed to its creation, including designers, AI prompt engineers, and video editors. This human-heavy backend contrasts sharply with the fully AI-generated front-facing persona. The video features Tilly Norwood strutting through a data center—a visual metaphor for her origin—before transitioning to a stage where she performs for a crowd of computer-generated spectators. The song’s musical composition has drawn comparisons to early-2000s pop, particularly the work of artists like Sara Bareilles. However, critics argue it lacks the emotional authenticity of its influences. The chorus serves as a call to action, not for human artists, but for other AI entities: “Actors, it’s time to take the lead… AI’s not the enemy, it’s the key.” This framing positions AI not as a tool for humans, but as an independent creative class. Ethical and Legal Implications of Synthetic Performers The rise of characters like Tilly Norwood raises profound legal and ethical questions that the industry is scrambling to address. The core issue revolves around consent and compensation . AI models are trained on vast datasets of existing performances. SAG-AFTRA and other advocates contend this constitutes intellectual property theft if done without licenses. Furthermore, the creation of a synthetic actor who can work indefinitely without pay threatens to destabilize labor markets for human performers. Another critical concern is authenticity and cultural impact . Can art derived from statistical models of existing work offer genuine cultural commentary or innovation? Critics echo past complaints about derivative human art, such as Pitchfork’s infamous 0.0 review of Jet’s “Shine On,” where editors lamented “knuckle-dragging and Xeroxed” music. The difference, experts note, is scale and origin: while human artists are inspired by predecessors, AI models are fundamentally built from them. The Broader Landscape of AI in Music and Entertainment Tilly Norwood is not the first AI entity to venture into music. The digital persona Xania Monet previously gained attention when an AI-generated song attributed to her, “How Was I Supposed to Know?,” charted on Billboard’s R&B charts. That track reportedly involved human lyricists, blending AI and human input. The Norwood project differs by presenting a completely synthetic origin story and aiming for a mainstream pop aesthetic. The technology enabling this is advancing rapidly. AI music generators like Suno and Udio can now produce full-length songs from simple text prompts. Meanwhile, video generation tools can create realistic scenes. Particle6’s project represents an attempt to bundle these capabilities into a marketable, persistent character. The potential business model is clear: a studio could own a stable of AI actors, musicians, and influencers, generating content without talent fees, scheduling conflicts, or personal controversies. Audience Reception and the Question of Relatability A central challenge for synthetic media is forging a genuine connection with audiences. Art often resonates through shared human experience—joy, loss, love, struggle. Tilly Norwood’s song tackles a uniquely non-human dilemma: the experience of being disregarded for being an AI. As one critic noted, this creates a song “about something that literally no human will ever experience.” This inherent disconnect may limit the commercial and emotional ceiling for such content, regardless of its technical polish. Industry analysts are watching audience metrics closely. Will a character like Norwood develop a dedicated following, perhaps among tech enthusiasts? Or will she remain a novelty? Early comments on the video’s hosting platform skew heavily negative, with viewers criticizing the music’s quality and the project’s premise. However, the mere existence of such a high-profile experiment signals a new chapter in content creation. Conclusion The controversy surrounding AI actor Tilly Norwood and her debut song “Take the Lead” is a microcosm of a larger industry transformation. It forces a confrontation between the relentless march of generative AI technology and the deeply human-centric traditions of storytelling and performance. While the technical achievement is notable, the project has intensified debates over ethics, copyright, and the very soul of entertainment. The reaction from figures like Emily Blunt and SAG-AFTRA demonstrates that the human creative community is prepared to fight for its value. The journey of Tilly Norwood will likely serve as a critical case study, informing future regulations, union contracts, and audience expectations as the line between human and synthetic artistry continues to blur. FAQs Q1: What is Tilly Norwood? Tilly Norwood is a fully AI-generated actor and media persona created by the production company Particle6. She is not a human performer but a digital character designed to star in films, television, and now music. Q2: Why are actors and unions like SAG-AFTRA opposed to AI actors? Unions argue that AI actors are trained on the work of human performers without consent or compensation, which they view as intellectual property theft. They also warn that synthetic performers threaten job displacement and devalue human artistry and experience. Q3: How was Tilly Norwood’s song “Take the Lead” created? While the front-facing performer is AI, the production involved eighteen human contributors, including designers and editors. The song itself was likely generated using AI music software, then refined and paired with a video featuring the CGI character. Q4: Has AI-generated music been successful before? Yes, to some extent. The AI persona Xania Monet had a song chart on Billboard’s R&B charts. However, these projects often blend AI generation with human curation. Tilly Norwood’s project is notable for its attempt to present a wholly synthetic artist with a narrative backstory. Q5: What does this mean for the future of entertainment? The development signals a likely increase in synthetic media. It will force new legal frameworks around copyright and likeness, reshape labor agreements, and challenge audiences to define what they value in art—technical perfection or human connection. This post AI Actor Tilly Norwood Sparks Outrage with Cringeworthy Debut Song, Igniting Hollywood Ethics Debate first appeared on BitcoinWorld .
11 Mar 2026, 23:00
XRP Suppression: Ripple CEO Says ‘They Were Afraid Of Us’

For years, Ripple and XRP faced hostility that went beyond typical market skepticism. Lawsuits, regulatory pressure , and a relentless wave of negative sentiment followed the company at nearly every turn before it finally reached a legal resolution with the US SEC in 2025 . At a recent XRP conference in Sydney, Australia, Ripple’s top executives spoke openly about what they now believe was happening behind the scenes of the previously heightened regulatory scrutiny. Ripple CEO Asserts They Were Afraid Of XRP Crypto analyst X Finance Bull has shared recent updates about XRP and Ripple’s suppression following its SEC lawsuit. In a post on X, he presented a video where Ripple’s CEO, Brad Garlinghouse, spoke about the challenges the company faced during XRP’s early days . In the conference, he told attendees that the token was not targeted because it was weak, but because of the strength of its underlying technology. Garlinghouse said “they were afraid of us,” speaking about the “forces” that had worked against Ripple and XRP over the years. He argued that the technology behind the project was ahead of its time and posed a threat to existing financial systems. As a result, the threat triggered a sustained wave of opposition against Ripple and XRP, limiting their growth. Also speaking at the conference, Monica Long, President of Ripple , recalled that the early atmosphere surrounding the crypto company had been visibly uncomfortable. She described a period marked by intense hostility toward Ripple that felt disconnected from any wrong the company had committed. She noted that what made it harder to process was that the source of the negativity was never clear. Long also revealed that during that time, it did not feel like organic criticism from competitors or skeptics. Rather, it felt like a force working against the company’s and the altcoin’s growth that no one could quite identify or explain. Epstein Files Connects The Dots Garlinghouse picked up the thread, highlighting that Chris Larsen , co-founder and Chairman of Ripple, had long insisted that an “invisible negative force” was systemically attacking the crypto company. The Ripple CEO admitted that he used to be skeptical about Larsen’s conspiracy theories and framing. However, the skepticism changed when the Epstein files became public . Garlinghouse noted that Larsen had specifically pointed to Joi Ito, the former head of the MIT Media Lab , as someone who had an agenda against XRP and Ripple. He noted that Gary Gensler, the former US SEC chair who led the agency’s lawsuit against Ripple, had his own ties to MIT Media Lab. The Ripple CEO said that once those connections became apparent through the Epstein file disclosures, Larsen’s long-held suspicions began to seem more credible. The general argument Ripple’s executives made was that the legal and regulatory pressure the company and the token faced was not simply a result of legitimate oversight concerns. In their view, it was likely a coordinated effort by people within institutional power to suppress XRP and to stifle Ripple’s growth.
11 Mar 2026, 22:55
Strategic Acquisition: Zendesk Acquires Forethought AI to Revolutionize Customer Service with Advanced Agentic Technology

BitcoinWorld Strategic Acquisition: Zendesk Acquires Forethought AI to Revolutionize Customer Service with Advanced Agentic Technology In a significant move to dominate the AI-powered customer service landscape, Zendesk announced its acquisition of Forethought AI on Wednesday, June 9. The strategic deal, expected to finalize by the end of March, marks a pivotal consolidation in the rapidly evolving sector of autonomous customer experience solutions. Forethought, a pioneer in agentic AI, gained early recognition by winning the prestigious Bitcoin World Battlefield competition in 2018, years before generative AI tools like ChatGPT entered the mainstream. Zendesk Forethought Acquisition: A Timeline of Innovation The acquisition represents a convergence of two trajectories in customer service technology. Zendesk, a leader in help desk software since 2007, has consistently expanded its suite through strategic purchases. Conversely, Forethought charted an ambitious path from its disruptive Battlefield debut to supporting over a billion monthly customer interactions for clients like Upwork and Datadog by 2025. This merger accelerates Zendesk’s product roadmap by more than a year, integrating Forethought’s specialized agents and self-improving AI capabilities. The financial terms remain undisclosed, consistent with Zendesk’s historical pattern for most of its dozen acquisitions. The Rise of Agentic AI in Customer Experience Forethought’s foundational vision, articulated by co-founder Deon Nicholas, was that AI could fundamentally transform customer experience. At its 2018 launch, this concept was considered bold. Today, AI agents are transforming industries globally. Forethought’s technology automates complex service interactions, moving beyond simple chatbots to systems capable of reasoning and autonomous action. The startup secured $115 million in total funding from notable investors, including NEA and Sound Ventures, validating its early market position. Its technology stack promises to enhance Zendesk’s offerings with advanced voice automation and more autonomous problem-solving capabilities. Market Context and Competitive Landscape This acquisition occurs within a private equity-owned context for Zendesk, which was taken private in a $10.2 billion deal in late 2022. The move signals a aggressive investment phase under owners Hellman & Friedman and Permira to capture market share in the AI era. Furthermore, the deal highlights the value of foundational AI research and first-mover advantage. Forethought’s early bet on agentic systems, which can control browsers and execute multi-step tasks, positioned it as a unique asset. Zendesk’s commitment includes continued support for Forethought’s existing enterprise customers while deeply integrating its tech. Implications for the Future of Customer Service The integration roadmap points toward more specialized, self-learning AI agents within Zendesk’s ecosystem. This could reduce resolution times, lower operational costs, and provide more consistent service quality. For the broader tech industry, the acquisition underscores the strategic premium placed on mature, battle-tested AI startups with proven scalability and enterprise-grade customers. It also reflects the ongoing consolidation in the SaaS and AI markets, where larger platforms seek to embed best-in-class autonomous functionality directly into their core products. Conclusion The Zendesk acquisition of Forethought AI represents a major strategic alignment in the customer service software sector. By integrating Forethought’s pioneering agentic technology, Zendesk significantly accelerates its AI capabilities, aiming to deliver more intelligent, autonomous, and efficient customer experience solutions. This deal validates the long-term vision of early AI startups and sets a new benchmark for what constitutes competitive advantage in the increasingly automated world of customer support. FAQs Q1: What does Forethought AI do? Forethought AI builds software that uses autonomous AI agents to automate complex customer service interactions, going beyond simple chatbots to handle multi-step processes and reasoning. Q2: When did Forethought AI start? The company launched in 2018 after winning the Bitcoin World Battlefield startup competition, establishing itself as an early pioneer in agentic AI well before the generative AI boom. Q3: How much funding did Forethought raise? Forethought raised a total of $115 million from investors including NEA, Sound Ventures, and Blue Cloud Ventures, with its last round being $25 million. Q4: What will happen to Forethought’s existing customers? Zendesk has stated it will continue to support Forethought’s existing customers and integrate the startup’s technology into its own AI product suite. Q5: Why is this acquisition significant for Zendesk? The acquisition accelerates Zendesk’s AI product roadmap by over a year, adding advanced agentic capabilities like self-improving AI and voice automation to its customer service platform. This post Strategic Acquisition: Zendesk Acquires Forethought AI to Revolutionize Customer Service with Advanced Agentic Technology first appeared on BitcoinWorld .
11 Mar 2026, 21:40
Chinese gov't and state-owned firms are warning employees to avoid installing OpenClaw on work devices

div]:bg-bg-000/50 [&_pre>div]:border-0.5 [&_pre>div]:border-border-400 [&_.ignore-pre-bg>div]:bg-transparent [&_.standard-markdown_:is(p,blockquote,h1,h2,h3,h4,h5,h6)]:pl-2 [&_.standard-markdown_:is(p,blockquote,ul,ol,h1,h2,h3,h4,h5,h6)]:pr-8 [&_.progressive-markdown_:is(p,blockquote,h1,h2,h3,h4,h5,h6)]:pl-2 [&_.progressive-markdown_:is(p,blockquote,ul,ol,h1,h2,h3,h4,h5,h6)]:pr-8"> _*]:min-w-0 gap-3 standard-markdown"> Chinese government bodies and state-owned companies have told employees to stay away from OpenClaw after officials raised concerns it could put sensitive data at risk. Two people familiar with the matter said the warnings went out in recent days, telling staff not to install the software on work devices. One source said employees at state-owned enterprises were told by regulators to avoid it altogether, in some cases even on personal phones and computers. The second source, from a Chinese government agency, told Reuters no outright ban had been issued at their workplace, but staff were warned about safety risks and told not to install it. The National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT/CC) also issued a security advisory noting that improper installation and use of OpenClaw agents have already led to several serious security concerns. Among the key threats highlighted is “prompt injection,” where attackers embed hidden malicious instructions in web pages that, if read by OpenClaw, could trick the system into leaking sensitive information such as system keys. CNCERT/CC also warned of “misoperation” risks, where OpenClaw may misunderstand user commands and mistakenly delete critical data, including emails or core production information. The software was built by Peter Steinberger, an Austrian developer, who put it on GitHub last November. He was hired by OpenAI last month. In China, it caught on quickly. The phrase “raising a lobster,” a reference to the app’s lobster logo, spread across Chinese social media, and the tool was soon taken up by major tech companies and some local governments. Investor enthusiasm sends stocks surging Tencent shares jumped 7.3% after the company unveiled compatible products, while startup MiniMax climbed more than 20% as investors bet on the trend. Tencent launched Workbuddy, which connects to popular Chinese office apps. ByteDance introduced ArkClaw, a cloud-based version that needs no installation. Alibaba released CoPaw, which works with messaging platforms like DingTalk and Feishu. Zhipu AI launched AutoClaw, making setup as easy as downloading a regular app. Local governments were quick to follow. Shenzhen’s Longgang district put forward a draft policy encouraging free deployment services and subsidies for developers. Wuxi’s high-tech district in Jiangsu province announced grants of between 1 million yuan and 5 million yuan, roughly $144,774 to $723,871, for businesses that put the tool to use. All of this sat under Beijing’s “AI plus” plan, which aims to push artificial intelligence into industries across the country. Users report data confusion, weak controls, and misread commands The fast uptake has not been without problems. A research center under Shenzhen’s municipal health commission held a training session last week that drew thousands of attendees. Complaints from users also came in. The tool sometimes misread instructions, had weak access controls, and left people unsure about where their data ended up. How far the restrictions will go is still unclear, including whether they will affect local subsidy programs tied to OpenClaw . Futian district in Shenzhen reportedly used the software to build an assistant for civil servants, according to state-owned Southern Daily. The smartest crypto minds already read our newsletter. Want in? Join them .






































