News
25 Mar 2026, 17:50
Data Center Ban: Sanders and AOC Launch Bold Legislative Assault on AI Infrastructure

WASHINGTON, D.C. — March 25, 2026 — In a dramatic legislative move, Senator Bernie Sanders (I-VT) and Representative Alexandria Ocasio-Cortez (D-NY) today introduced companion bills proposing an unprecedented federal ban on the construction of new large-scale data centers. This bold initiative directly links the physical infrastructure powering artificial intelligence to the urgent need for comprehensive AI regulation, marking a significant escalation in the political debate surrounding technology’s societal impact.

Data Center Ban Targets AI’s Power-Hungry Backbone

The proposed legislation, officially titled the “AI Infrastructure Responsibility Act,” would immediately halt all new data center projects with peak power loads exceeding 20 megawatts. This threshold captures nearly all facilities designed to support advanced artificial intelligence training and inference workloads. Consequently, the bill represents the most direct attempt yet to use infrastructure policy as a lever for controlling AI development.

Senator Sanders’ office released a statement framing the proposal as a necessary precaution. “We cannot allow an unregulated AI arms race to proceed on the back of massive environmental and societal costs,” the statement read. The legislation mandates that the construction moratorium remain in effect until Congress enacts and the President signs comprehensive artificial intelligence regulation addressing specific concerns outlined in the bill.

Mounting Backlash Against AI’s Physical Footprint

The political push arrives amid growing public and expert apprehension about artificial intelligence’s rapid advancement. A March 2026 Pew Research Center poll found that 52% of U.S. adults report being “more concerned than excited” about AI’s increased use in daily life. Merely 10% of respondents said their excitement outweighed their concern.
This sentiment provides crucial context for the lawmakers’ strategy. Furthermore, the bill cites warnings from prominent technology figures. These include Tesla and SpaceX CEO Elon Musk, who has repeatedly called AI “far more dangerous than nukes,” and Google DeepMind chief Demis Hassabis. Anthropic CEO Dario Amodei, OpenAI CEO Sam Altman, and AI pioneer Geoffrey Hinton have also expressed support for regulatory oversight. The legislation uses these expert concerns to justify its preemptive approach.

The Core Provisions of the Proposed Ban

The bill outlines several key requirements that future AI regulation must meet before the data center moratorium can be lifted. These provisions aim to address multiple dimensions of risk associated with advanced AI systems.

- Pre-Deployment Certification: The U.S. government must establish a review process to certify AI models before public release.
- Job Displacement Protections: Regulations must enact concrete safeguards against AI-driven workforce disruption.
- Environmental Standards: Laws must limit the carbon footprint and water usage of data infrastructure.
- Labor Requirements: Construction of permitted data centers must utilize union labor.
- Chip Export Controls: The bill seeks to prohibit exporting advanced semiconductors to nations lacking similar AI rules.

Political Realities and the China Factor

Despite the compelling rationale, the legislation faces steep political hurdles. The AI industry has significantly increased its lobbying and political spending. Industry groups argue that stifling infrastructure development could cede technological leadership, particularly to China. Many policymakers fear losing a perceived AI arms race, making strict regulation politically difficult. Analysts view this bill as an ambitious opening bid in a complex negotiation. “This proposal sets the outer boundaries of the debate,” said Dr. Elena Torres, a technology policy fellow at the Brookings Institution.
“It forces a conversation about whether we can regulate AI without also regulating the very concrete, energy-intensive systems that make it possible.” The bill reframes AI regulation not just as code and algorithms but as steel, concrete, and megawatts.

The 20-Megawatt Threshold: A Strategic Line

The choice of a 20-megawatt power threshold is strategically significant. Modern AI data centers, especially those built by hyperscalers like Google, Amazon, and Microsoft, often demand 50 to 100+ megawatts. A single campus can consume as much power as a medium-sized city. The 20 MW limit effectively blocks all new facilities intended for large-scale model training while potentially allowing smaller, edge-computing installations to proceed. This distinction acknowledges different use cases for AI infrastructure. However, it squarely targets the massive, centralized facilities seen as essential for developing the next generation of frontier AI models. The policy explicitly connects the scale of computational power to the scale of potential societal risk.

Environmental and Economic Impacts

Data centers currently account for approximately 2% of total U.S. electricity consumption, a figure projected to triple by 2030 without intervention. Their water usage for cooling, particularly in drought-prone regions, has also sparked local opposition. The Sanders-AOC bill directly addresses these environmental externalities, framing them as inseparable from the AI governance discussion. Economically, the proposal has drawn immediate criticism from technology companies and some local governments that have courted data center investments for job creation and tax revenue. Proponents counter that the bill’s union labor requirement and focus on job displacement protections aim to create higher-quality jobs and ensure a just transition for workers affected by AI automation.
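To put the 20 MW threshold in perspective, here is a rough back-of-the-envelope calculation. It assumes continuous operation at peak load and a ballpark U.S. average of about 10,700 kWh of electricity per household per year; both figures are illustrative assumptions, not numbers from the bill:

```python
# Back-of-the-envelope: annual consumption of a facility running
# continuously at the bill's 20 MW threshold, expressed in
# household-equivalents. All inputs are illustrative assumptions.

threshold_mw = 20
hours_per_year = 24 * 365                      # 8,760 hours
annual_mwh = threshold_mw * hours_per_year     # MWh consumed per year

avg_home_kwh_per_year = 10_700                 # rough U.S. average (assumption)
homes_equivalent = annual_mwh * 1_000 / avg_home_kwh_per_year

print(f"{annual_mwh:,} MWh/year ≈ {homes_equivalent:,.0f} average homes")
```

A threshold facility thus draws on the order of 16,000 homes’ worth of electricity; a 50–100 MW hyperscale campus scales that by 2.5–5×, consistent with the “medium-sized city” comparison above.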
Conclusion

The proposal by Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez to ban large data center construction represents a fundamental shift in the AI policy landscape. By directly linking infrastructure development to regulatory outcomes, the lawmakers are challenging the industry’s growth-at-all-costs narrative. This data center ban forces a critical examination of AI’s physical and environmental costs. Whether the bill passes or not, it has successfully elevated a crucial question: Can the future of artificial intelligence be sustainable, equitable, and safe if its foundational infrastructure remains unchecked?

FAQs

Q1: What exactly does the proposed data center ban do?
The legislation would impose a federal moratorium on constructing new data centers with a peak power demand over 20 megawatts. The ban would remain until Congress passes comprehensive AI regulation meeting specific standards on safety, jobs, environment, and labor.

Q2: Why are Sanders and AOC targeting data centers instead of AI software directly?
The lawmakers argue that the immense computational power required for advanced AI is a primary enabler of its risks. By restricting the infrastructure, they aim to create a powerful incentive for the industry to engage seriously on broader regulatory frameworks.

Q3: How would this affect existing data centers or planned projects?
The bill specifically targets new construction. Existing data centers and projects already under construction with permits would not be affected. However, any new project in the planning stages that exceeds the 20 MW threshold would be halted.

Q4: What are the chances this bill becomes law?
Political analysts consider it an uphill battle given industry opposition and geopolitical concerns about competition with China. However, it is seen as a significant marker that will influence the scope and seriousness of the coming debate on AI regulation.
Q5: Does the bill affect all data centers or only those for AI?
The 20 MW threshold is a technical measure that would impact most large-scale facilities. While the bill’s rationale focuses on AI, the power limit would also affect large data centers built for cloud computing, cryptocurrency mining, or other intensive applications, as the infrastructure is often interchangeable.

This post Data Center Ban: Sanders and AOC Launch Bold Legislative Assault on AI Infrastructure first appeared on BitcoinWorld.
25 Mar 2026, 17:45
Visa Joins Canton Network as Super Validator: A Pivotal Move for Private Asset Tokenization

In a landmark development for the convergence of traditional finance and distributed ledger technology, global payments giant Visa has officially joined the Canton Network as a super validator. This strategic move, first reported by Decrypt, positions Visa at the operational heart of a pioneering blockchain network designed specifically for the tokenization of real-world assets (RWAs). Consequently, Visa becomes the first major payments corporation to undertake such a foundational role within a privacy-focused financial blockchain, signaling a profound shift in how institutional finance may leverage decentralized infrastructure for settlement and compliance.

Visa’s Role as a Canton Network Super Validator

Visa will operate as one of approximately 40 super validators on the Canton Network. This role is fundamentally different from a typical node operator on a public blockchain. As a super validator, Visa will participate directly in the network’s consensus mechanism, helping to secure the blockchain, validate transactions, and maintain the integrity of the ledger. The company plans to utilize the network’s unique privacy-preserving technology to assist banking partners in constructing new stablecoin payment and settlement systems. This initiative could potentially streamline cross-border transactions and reduce counterparty risk. Furthermore, Visa’s involvement provides the Canton Network with unparalleled institutional credibility and deep expertise in global payment rails.

Understanding the Canton Network’s Architecture

The Canton Network is not a single, monolithic blockchain. Instead, it represents an “interoperable network” of blockchains built using Daml, a smart contract language developed by Digital Asset.
This architecture allows separate applications—like those from different banks or asset managers—to run on their own dedicated sub-networks, or “domains.” Crucially, these domains can interoperate securely and privately through the Canton protocol. The network’s design specifically addresses key institutional concerns:

- Privacy: Transaction details and smart contract logic are only visible to permissioned participants.
- Scalability: The network scales by adding more parallel domains rather than burdening a single chain.
- Finality: Settlements are atomic and final, eliminating settlement risk.

This framework makes it uniquely suited for regulated financial activities like securities settlement, syndicated loans, and insurance contracts, where data confidentiality is paramount.

The Broader Trend of Real-World Asset Tokenization

Visa’s entry into the Canton Network underscores the accelerating institutional adoption of blockchain for tokenizing traditional assets. Tokenization involves creating a digital representation of an asset—such as a bond, a fund interest, or real estate—on a blockchain. This process can enhance liquidity, enable fractional ownership, and automate compliance through programmable logic. Major financial institutions, including BlackRock and JPMorgan, are actively exploring this space. The involvement of a payments leader like Visa provides critical infrastructure for the “last mile” of these tokenized systems: efficient, global payment settlement. Analysts view this as a natural evolution, where Visa leverages its existing network to facilitate value transfer for a new generation of digital assets.

Implications for Stablecoins and Bank Settlement Systems

Visa’s stated goal to help banks build stablecoin payment systems on Canton is particularly significant. Stablecoins, which are digital currencies pegged to stable assets like the US dollar, have emerged as a potent tool for instant, borderless value transfer.
However, their use in regulated wholesale finance has been limited by concerns over transparency, control, and integration with legacy systems. The Canton Network, with its privacy features and Visa’s validation, could provide a controlled environment where banks can issue or use stablecoins for interbank settlements without exposing sensitive transaction data to the public. This could lead to the development of new, 24/7 wholesale payment systems that are faster and cheaper than existing options like SWIFT.

Expert Analysis on the Strategic Shift

Industry observers note that Visa’s move is more than a simple technology experiment. “This is a strategic positioning play,” explains a fintech analyst at a major consultancy. “Visa is not just adopting blockchain; it is helping to govern and secure the underlying infrastructure that may one day compete with or complement its core network. By becoming a validator, Visa ensures it has a seat at the table where the future rules of finance are written.” The decision also reflects a growing consensus that permissioned, interoperable blockchains like Canton may see faster adoption for high-value institutional use cases than fully public, permissionless networks, at least in the near term.

Conclusion

The announcement that Visa is joining the Canton Network as a super validator marks a pivotal moment in the maturation of blockchain technology for finance. It demonstrates that leading payment giants are now moving beyond exploration to actively participating in the core governance and security of next-generation financial infrastructure. This development significantly bolsters the credibility of the Canton Network and the broader real-world asset tokenization movement. Ultimately, Visa’s involvement could accelerate the development of private, efficient, and compliant stablecoin settlement systems, potentially reshaping the backbone of global payments.

FAQs

Q1: What is the Canton Network?
The Canton Network is an interoperable blockchain platform designed for institutional financial applications. It enables separate, private blockchains (domains) to connect and transact with each other securely, focusing on privacy, scalability, and finality for assets like securities and derivatives.

Q2: What does it mean for Visa to be a “super validator”?
As a super validator, Visa will help operate and secure the Canton Network by participating in its consensus mechanism. This involves validating transactions, maintaining the ledger’s integrity, and supporting the network’s overall health and performance.

Q3: Why is Visa’s involvement significant?
Visa is the first major global payments company to take on such a foundational role in a financial blockchain network. Its participation lends immense institutional credibility and practical expertise in payments, potentially accelerating the adoption of the network for real-world banking applications.

Q4: How does the Canton Network handle privacy?
The network uses a privacy-preserving architecture where transaction data and smart contract details are only shared between directly involved, permissioned parties. This allows institutions to transact and settle on a shared ledger without exposing sensitive business information to competitors or the public.

Q5: What are the potential use cases for this technology?
Primary use cases include the tokenization and settlement of real-world assets (bonds, funds, private equity), the creation of private interbank stablecoin payment systems, and the automation of complex multi-party financial agreements like syndicated loans and insurance contracts.
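The atomic-finality property highlighted in this article (a settlement either completes on both legs or not at all) can be sketched in a few lines. This is emphatically not Canton or Daml code; every name below is hypothetical, and an in-memory dictionary stands in for the real settlement domain, purely to illustrate delivery-versus-payment atomicity:

```python
# Minimal sketch of atomic delivery-versus-payment (DvP) settlement:
# the asset leg and the cash leg either both commit or neither does.
# Illustration of the atomicity property only; all names hypothetical.

class SettlementError(Exception):
    pass

def settle_dvp(ledger, buyer, seller, asset, qty, cash):
    """Atomically swap `qty` of `asset` for `cash` USD between two parties."""
    snapshot = {k: dict(v) for k, v in ledger.items()}  # capture pre-state
    try:
        if ledger[seller].get(asset, 0) < qty:
            raise SettlementError("seller lacks asset")
        if ledger[buyer].get("USD", 0) < cash:
            raise SettlementError("buyer lacks cash")
        # Asset leg
        ledger[seller][asset] -= qty
        ledger[buyer][asset] = ledger[buyer].get(asset, 0) + qty
        # Cash leg
        ledger[buyer]["USD"] -= cash
        ledger[seller]["USD"] = ledger[seller].get("USD", 0) + cash
    except SettlementError:
        ledger.clear()
        ledger.update(snapshot)   # roll back: neither leg settles
        raise

ledger = {
    "bank_a": {"USD": 1_000_000},
    "bank_b": {"bond": 50, "USD": 0},
}
settle_dvp(ledger, buyer="bank_a", seller="bank_b",
           asset="bond", qty=50, cash=900_000)
```

If either precondition fails, the rollback restores the pre-trade state, which is the behavior the article describes as "eliminating settlement risk"; in Canton this guarantee comes from the protocol itself rather than an application-level rollback.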
25 Mar 2026, 17:40
MemWal AI Memory Layer: Walrus Protocol’s Revolutionary Breakthrough for Decentralized AI Agents on Sui Blockchain

In a significant development for decentralized artificial intelligence, the Walrus storage protocol has unveiled MemWal, a groundbreaking memory layer specifically designed for AI agents operating on the Sui blockchain network. This announcement, made via the project’s official X account on March 15, 2025, represents a major advancement in how AI systems store, recall, and share information within decentralized environments. The MemWal technology addresses persistent challenges in blockchain-based data storage while enabling AI agents to maintain permanent memory of conversational and reasoning processes.

MemWal AI Memory Layer: Technical Architecture and Innovation

The MemWal memory layer introduces a novel approach to decentralized data persistence for artificial intelligence systems. Unlike traditional storage solutions that treat AI agent data as static information, MemWal creates dynamic memory structures that evolve with agent interactions. This technology enables AI agents to retain context across multiple sessions, creating continuity in conversations and decision-making processes. The system operates on Walrus’s existing infrastructure, which leverages the Sui network’s high-throughput capabilities and parallel transaction processing.

MemWal’s architecture incorporates several key innovations. First, it implements a hierarchical memory structure that separates short-term working memory from long-term persistent storage. Second, it utilizes cryptographic techniques to ensure memory integrity while maintaining privacy controls. Third, the system includes permissioning mechanisms that allow selective memory sharing between authorized AI agents. These technical features collectively address what developers have called the “memory bottleneck” in decentralized AI systems.

Comparative Analysis: MemWal vs. Traditional AI Memory Systems

Traditional centralized AI systems typically store memory in proprietary databases controlled by single entities. This approach creates several limitations, including vendor lock-in, single points of failure, and privacy concerns. In contrast, MemWal’s decentralized architecture distributes memory storage across the Sui network, eliminating central control points. The table below illustrates key differences:

| Feature | Traditional AI Memory | MemWal Decentralized Memory |
| --- | --- | --- |
| Storage Control | Centralized entity | Distributed network |
| Data Persistence | Vendor-dependent | Blockchain-guaranteed |
| Access Control | Proprietary systems | Cryptographic permissions |
| Interoperability | Limited to platform | Cross-agent compatible |
| Auditability | Opaque processes | Transparent verification |

Sui Blockchain Infrastructure: The Foundation for Advanced AI Memory

The Sui network provides essential infrastructure that makes MemWal’s capabilities possible. Sui’s unique architecture, developed by former Meta engineers, offers several advantages for AI applications. Its object-centric data model aligns naturally with how AI agents process and store information. Additionally, Sui’s parallel transaction execution enables multiple AI agents to access and update memory simultaneously without creating bottlenecks. This capability is crucial for applications requiring real-time collaboration between artificial intelligence systems.

Sui’s consensus mechanism, based on the Narwhal and Bullshark protocols, ensures high throughput and low latency for memory operations. These performance characteristics are essential for AI agents that require rapid memory recall during complex reasoning tasks. Furthermore, Sui’s Move programming language provides enhanced security features that protect memory data from unauthorized access or manipulation. The combination of these technical elements creates a robust foundation for MemWal’s memory layer functionality.
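As a rough illustration of the hierarchical design described above (a bounded short-term working memory, a long-term persistent store, and permissioned sharing between agents), consider this minimal in-memory sketch. The class and method names are invented for illustration and do not reflect MemWal’s actual API:

```python
from collections import deque

class AgentMemory:
    """Toy two-tier agent memory: a bounded short-term working buffer
    plus a long-term store, with per-agent read permissions.
    Illustrative only; not MemWal's actual API."""

    def __init__(self, owner, working_size=4):
        self.owner = owner
        self.working = deque(maxlen=working_size)  # short-term, evicts oldest
        self.long_term = []                        # persistent entries
        self.readers = {owner}                     # agents allowed to read

    def remember(self, item, persist=False):
        self.working.append(item)
        if persist:
            self.long_term.append(item)

    def grant(self, agent):
        """Selective sharing: authorize another agent to read this memory."""
        self.readers.add(agent)

    def recall(self, requester):
        if requester not in self.readers:
            raise PermissionError(f"{requester} may not read {self.owner}'s memory")
        return list(self.working), list(self.long_term)

mem = AgentMemory("analyst_1", working_size=2)
mem.remember("greeting")
mem.remember("market question", persist=True)
mem.remember("follow-up")            # evicts "greeting" from working memory
mem.grant("analyst_2")
working, persistent = mem.recall("analyst_2")
```

In the real system the long-term tier would live in Walrus storage with cryptographic access controls rather than a Python list, but the separation of tiers and the permission gate capture the structure the article describes.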
Real-World Applications and Use Cases

MemWal enables several practical applications that were previously challenging in decentralized environments. Multiple AI agents can now collaborate on complex problems while maintaining shared context and reasoning history. For example, financial analysis agents could work together on market predictions, with each agent contributing specialized knowledge while accessing a common memory of previous analyses. Similarly, healthcare diagnostic agents could share patient interaction histories while maintaining privacy through selective memory permissions. The technology also supports educational applications where AI tutors maintain longitudinal learning profiles across multiple sessions. Research collaboration represents another promising use case, with AI research assistants sharing literature reviews and experimental data through controlled memory access. These applications demonstrate MemWal’s potential to transform how artificial intelligence systems interact and collaborate in decentralized ecosystems.

Walrus Protocol Evolution: From Storage to Intelligent Memory

Walrus (WAL) has evolved significantly since its initial launch as a storage protocol on the Sui network. Originally focused on decentralized file storage similar to traditional solutions like IPFS or Arweave, the protocol has progressively incorporated more sophisticated data management capabilities. The introduction of MemWal represents a strategic pivot toward intelligent storage solutions specifically designed for artificial intelligence applications. This evolution reflects broader industry trends toward specialized infrastructure for AI development. The Walrus team has emphasized that MemWal is not merely an extension of existing storage capabilities but represents a fundamentally new approach to data persistence.
By treating memory as a first-class citizen in the storage hierarchy, the protocol enables new types of AI applications that were previously impractical on decentralized networks. This development aligns with growing demand for AI infrastructure that combines the benefits of blockchain technology with advanced artificial intelligence capabilities.

Technical Implementation and Developer Integration

Developers can integrate MemWal into their AI applications through standardized APIs that abstract the underlying complexity of the memory layer. The implementation includes several key components:

- Memory Management SDK: Provides tools for creating, updating, and querying agent memories
- Permission Framework: Enables fine-grained control over memory access and sharing
- Consistency Guarantees: Ensures memory integrity across distributed nodes
- Query Optimization: Accelerates memory retrieval for time-sensitive applications

These components work together to provide a comprehensive memory solution for AI developers. The system also includes monitoring and analytics tools that help developers optimize memory usage patterns and identify performance bottlenecks. This developer-focused approach aims to accelerate adoption by reducing integration complexity while maintaining robust functionality.

Industry Context and Competitive Landscape

The announcement of MemWal occurs within a rapidly evolving landscape of decentralized AI infrastructure. Several projects are exploring similar territory, though with different technical approaches and blockchain foundations. Comparative analysis reveals that MemWal’s specific focus on persistent conversational memory represents a unique positioning within this competitive space. The integration with Sui’s high-performance blockchain provides additional differentiation from solutions built on other networks. Industry experts note that successful AI memory solutions must address several critical challenges.
These include balancing privacy with collaboration, ensuring performance at scale, and maintaining cost efficiency. Early technical documentation suggests that MemWal’s architecture has been designed with these considerations in mind. The protocol’s economic model, which utilizes the WAL token for memory operations, aims to create sustainable incentives for network participants while keeping costs predictable for developers.

Future Development Roadmap and Research Directions

The Walrus team has outlined an ambitious development roadmap for MemWal following its initial release. Planned enhancements include advanced compression algorithms to reduce storage costs, improved indexing for faster memory retrieval, and expanded support for different memory types beyond conversational data. Research initiatives focus on several frontier areas, including episodic memory for sequential decision-making and semantic memory for conceptual understanding. Long-term vision documents describe a future where MemWal evolves into a comprehensive memory ecosystem supporting diverse AI applications. This ecosystem would include specialized memory modules for different domains, standardized interfaces for memory interoperability, and governance mechanisms for community-driven development. These plans reflect the project’s commitment to continuous innovation in decentralized AI infrastructure.

Conclusion

The MemWal AI memory layer represents a significant advancement in decentralized artificial intelligence infrastructure on the Sui blockchain. By enabling permanent memory storage and sharing for AI agents, Walrus protocol addresses critical challenges in blockchain-based AI development. This technology facilitates new forms of multi-agent collaboration while maintaining the security and transparency benefits of decentralized systems.
As artificial intelligence continues to evolve, solutions like MemWal will play increasingly important roles in creating robust, scalable, and collaborative AI ecosystems. The successful implementation of this memory layer could accelerate adoption of decentralized AI applications across multiple industries.

FAQs

Q1: What exactly is MemWal and how does it differ from regular data storage?
MemWal is a specialized memory layer designed specifically for AI agents, enabling them to permanently store and recall conversational and reasoning processes. Unlike regular data storage that treats information as static files, MemWal creates dynamic memory structures that evolve with agent interactions and support context preservation across sessions.

Q2: Why is the Sui blockchain particularly suitable for MemWal’s implementation?
Sui’s object-centric data model aligns naturally with how AI agents process information, while its parallel transaction execution enables multiple agents to access memory simultaneously without bottlenecks. The network’s high throughput and low latency characteristics are essential for AI applications requiring rapid memory operations.

Q3: Can multiple AI agents truly collaborate using MemWal, and how does this work technically?
Yes, MemWal enables simultaneous collaboration through its permission framework and shared memory structures. Technically, agents can access common memory spaces while maintaining individual private memories, with cryptographic controls governing what information is shared and under what conditions.

Q4: What are the main practical applications for this technology in real-world scenarios?
Practical applications include collaborative financial analysis systems, healthcare diagnostic networks with shared patient histories, educational AI tutors with longitudinal learning profiles, and research collaboration platforms where AI assistants share literature reviews and experimental data.
Q5: How does MemWal address privacy concerns while enabling memory sharing between AI agents?
The system implements fine-grained permission controls using cryptographic techniques, allowing agents to share specific memory elements while keeping other information private. This selective sharing approach balances collaboration needs with privacy requirements through transparent and verifiable access controls.
25 Mar 2026, 16:50
Google Lyria 3 Pro Unleashes Revolutionary 3-Minute AI Music Generation for Creators

Google has dramatically expanded the creative possibilities of artificial intelligence with the official launch of its Lyria 3 Pro music generation model, a powerful upgrade that enables users to produce complete musical tracks up to three minutes in length. This announcement, made on Wednesday, represents a significant leap from the previous Lyria 3 model’s 30-second limit and arrives just one month after its predecessor’s debut. The new model promises superior creative control and deeper structural understanding, fundamentally changing how creators, from hobbyists to enterprise professionals, approach AI-assisted music production.

Google Lyria 3 Pro: A Quantum Leap in AI Music Duration and Control

The core advancement of Lyria 3 Pro lies in its extended generation capability. By increasing track length tenfold, Google directly addresses a primary limitation of earlier AI music tools. Consequently, creators can now envision and produce full song structures rather than short loops or ideas. Furthermore, the model introduces granular control over musical architecture. Users can specify distinct sections within their prompts, such as intros, verses, choruses, and bridges. This structural awareness allows for more coherent and professionally arranged compositions.

Google emphasizes that the model’s training utilized data from its partners and permissible data from YouTube and Google. The company also explicitly states that Lyria 3 Pro does not mimic specific artists. However, if a user references an artist in a prompt, the system will take “broad inspiration” from that artist’s style to generate a unique track. All outputs from both Lyria 3 and Lyria 3 Pro are watermarked with SynthID, an inaudible identifier that denotes AI generation, addressing growing industry concerns about transparency.
Strategic Integration Across Google’s Ecosystem

Google is deploying Lyria 3 Pro across multiple strategic fronts, embedding AI music generation deeply into its product suite. The primary consumer-facing access point remains the Gemini app, where music generation first appeared with Lyria 3. However, access to the Pro model will be restricted to paid subscribers, creating a clear tiered service model. This move signals Google’s intent to monetize advanced AI creative tools directly. Beyond Gemini, the model is rolling out to Google Vids, the company’s AI-powered video editing application, enabling users to score their videos with custom AI-generated soundtracks. Simultaneously, Lyria 3 Pro is being integrated into ProducerAI, a generative AI-powered music production tool that Google acquired just last month. This rapid integration showcases a concerted strategy to build a comprehensive, AI-native creative suite.

The enterprise sector represents another major focus. Google is adding Lyria 3 Pro’s capabilities to its Vertex AI platform (currently in public preview), the Gemini API, and AI Studio. This allows businesses and developers to build custom applications, automate content creation, and explore new use cases for branded or functional music.

The Broader Industry Context of AI Music

Google’s announcement arrives amidst heightened activity and concern within the music and streaming industries regarding AI-generated content. Earlier this week, Spotify released new tools empowering artists to review songs released under their name, a direct response to prevent misattribution by “AI slop” creators. Similarly, Deezer has launched technology to help any streaming service identify AI-generated music. These developments highlight an industry scrambling to establish norms, protect artists, and provide clarity to listeners. The rapid iteration from Lyria 3 to Lyria 3 Pro within a single month also underscores the intense competition in the generative AI space.
Companies are racing to improve model capabilities, reduce limitations, and capture market share among both professional creators and casual users. Google’s ability to offer significantly longer, structurally coherent music generation positions it as a formidable player against other AI music startups and tech giants exploring similar technology.

Technical and Creative Implications for Users

For musicians and content creators, Lyria 3 Pro offers a new tier of collaborative tool. The extended length transforms the AI from a sketchpad into a potential co-writer for full song ideas. The ability to dictate song structure is particularly noteworthy, as it moves AI music generation closer to traditional compositional workflows. Users are no longer merely generating a texture or loop; they are architecting a complete piece with a defined narrative arc.

The table below summarizes the key differences between Lyria 3 and the new Lyria 3 Pro model:

Feature | Lyria 3 | Lyria 3 Pro
Maximum Track Length | 30 seconds | 3 minutes
Structural Control | Basic | Advanced (Intros, Verses, Choruses, Bridges)
Primary Access | Gemini app (potentially broader access) | Gemini app (Paid Tier), Google Vids, ProducerAI, Enterprise APIs
Output Watermark | SynthID | SynthID

The monetization strategy, gating the Pro model behind a paywall, is a critical development. It establishes a precedent for how advanced generative AI features may be commercialized, moving beyond simple subscription models for chatbots to specialized tools for creative professionals. This could shape how other companies price and package their own AI creative suites.

Conclusion

Google’s launch of the Lyria 3 Pro music generation model marks a pivotal moment in the evolution of AI-assisted creativity. By solving the critical problem of length and introducing sophisticated structural control, Google has transformed its AI from a novelty into a potent professional tool.
The strategic deployment across consumer apps like Gemini and Google Vids, alongside powerful enterprise APIs, demonstrates a comprehensive vision for AI’s role in the future of media production. As the industry grapples with the ethical and practical implications of AI-generated content, tools like Lyria 3 Pro, coupled with identifiers like SynthID, represent a path forward that balances explosive creative potential with necessary transparency and artist consideration.

FAQs

Q1: What is the main improvement in Google Lyria 3 Pro over Lyria 3?
The most significant upgrade is the ability to generate music tracks up to three minutes long, a tenfold increase from Lyria 3’s 30-second limit. Additionally, it offers much finer creative control, allowing users to specify song sections like verses and choruses.

Q2: Where can I access the Lyria 3 Pro music generation model?
Access is rolling out to paid subscribers within the Gemini app. It is also being integrated into Google Vids for video scoring and ProducerAI, Google’s dedicated music production tool. Enterprise developers can access it via Vertex AI, the Gemini API, and AI Studio.

Q3: Does Lyria 3 Pro copy or mimic specific artists?
Google states the model does not mimic artists. However, if a user specifies an artist in the prompt, the system will take “broad inspiration” from that artist’s style to generate an original track, not a direct copy.

Q4: How does Google identify music created with its AI models?
All tracks generated by Lyria 3 and Lyria 3 Pro are marked with SynthID, a digital watermark that is inaudible to listeners but denotes the track was AI-generated. This is part of Google’s transparency efforts.

Q5: Why is the timing of this release significant in the broader AI music industry?
The release comes as streaming services like Spotify and Deezer are actively launching tools to identify and manage AI-generated content.
Google’s rapid model iteration and focus on transparency (via SynthID) position it as a responsible actor in a rapidly evolving and sometimes contentious field.

This post Google Lyria 3 Pro Unleashes Revolutionary 3-Minute AI Music Generation for Creators first appeared on BitcoinWorld.
25 Mar 2026, 16:30
Privacy Stablecoin Pioneer Payy Secures $6M to Power Confidential Institutional Transactions

BitcoinWorld Privacy Stablecoin Pioneer Payy Secures $6M to Power Confidential Institutional Transactions

In a significant move for financial privacy on blockchain, Payy, a developer specializing in Zero-Knowledge (ZK) proof-based privacy stablecoins, has successfully raised $6 million in a seed funding round. This capital injection, led by FirstMark Capital and reported by The Block, signals growing investor confidence in privacy-focused infrastructure for institutional cryptocurrency adoption. The funding round, which also included participation from Robot Ventures and DBA Crypto, elevates Payy’s total raised capital to $8 million. Consequently, the company plans to accelerate the development of its proprietary Ethereum Layer 2 network, designed explicitly to shield sensitive institutional financial data from public exposure.

Payy’s Privacy Stablecoin Vision and Funding Details

The recent $6 million seed round represents a pivotal step for Payy’s ambitious roadmap. Significantly, FirstMark Capital, a venture firm with a history of backing transformative tech companies, led the investment. Moreover, Robot Ventures and DBA Crypto provided additional support, highlighting cross-sector interest. This capital will primarily fund team expansion and initiatives to attract institutional clients.

Payy’s core mission addresses a critical pain point in decentralized finance: the lack of transactional privacy for enterprises. Currently, every transaction on a public ledger like Ethereum is visible, exposing details such as:

- Transaction histories and counterparty information
- Wallet balances and asset holdings
- Trading positions and strategic moves

For institutions, this transparency creates operational risks and competitive disadvantages. Therefore, Payy’s solution arrives at a crucial juncture in blockchain’s enterprise adoption curve.
The Technology Behind Confidential Transactions

Payy’s technological foundation rests on Zero-Knowledge proofs, a cryptographic method that allows one party to prove to another that a statement is true without revealing any information beyond the validity of the statement itself. In practice, this means a transaction can be verified as valid and settled on a blockchain without disclosing the sender, receiver, or amount. This approach differs fundamentally from mixing services or privacy coins by integrating privacy at the protocol level for stable assets.

Payy is constructing its own Ethereum Layer 2 network, the Payy Network. This dedicated rollup will batch transactions off the main Ethereum chain before settling final proofs, ensuring:

- Scalability through reduced mainnet congestion
- Cost-efficiency with lower transaction fees
- Full compatibility with the Ethereum Virtual Machine (EVM)

The network will specifically cater to institutions, enabling them to transact stablecoins with the same confidentiality expected in traditional finance.

Institutional Demand for Financial Privacy

The drive for solutions like Payy’s stems from clear market demand. Traditional financial entities, including hedge funds, family offices, and corporate treasuries, are increasingly exploring digital assets. However, public ledger transparency remains a major barrier. A transaction that reveals a large institution’s trading strategy or treasury management move could lead to front-running or market manipulation. Furthermore, regulatory compliance often requires confidentiality during the execution of large orders. By leveraging ZK-proof technology, Payy aims to provide a compliant pathway for these actors to engage with decentralized finance (DeFi) and digital asset markets without sacrificing operational security.

Roadmap and Current Product Suite

Payy has outlined a clear development timeline, moving from current services to its flagship network launch.
Presently, the company offers non-custodial wallet and credit card services, establishing its user-facing foundation. The imminent next phase is the launch of the Payy Network testnet, scheduled for next month. This testnet will allow developers and early institutional partners to experiment with private transactions in a controlled environment. Following successful testing, the mainnet launch is targeted for this summer. This progression from product to protocol illustrates a strategic build-out, using initial services to refine technology and understand user needs before deploying the core institutional network.

Payy Development Timeline and Funding

Phase | Detail | Timeline
Current Services | Non-custodial wallet & credit card | Live
Seed Funding | $6M raised, total $8M | Completed
Testnet Launch | Payy Network (Ethereum L2) | Next Month
Mainnet Launch | Full institutional network go-live | Target: This Summer

The Competitive Landscape for Privacy Stablecoins

Payy enters a nascent but competitive field. The concept of privacy-preserving stablecoins has gained traction alongside broader regulatory scrutiny of anonymous digital assets. Other projects explore similar territory, often using different technological implementations like confidential transactions or trusted execution environments. However, Payy’s focus on a dedicated Ethereum Layer 2 for institutions creates a distinct niche. This approach potentially offers better scalability and integration than privacy features bolted onto existing Layer 1 chains. The involvement of established venture capital firms like FirstMark Capital provides not just capital but also validation and strategic networking, which could be crucial for onboarding the first wave of institutional clients.

Broader Implications for Ethereum and DeFi

The development of the Payy Network contributes to the evolving Ethereum Layer 2 ecosystem. Each new specialized rollup adds to the network’s overall capacity and utility.
A privacy-focused L2 could unlock new DeFi primitives and institutional products that were previously impractical. For example, confidential decentralized exchanges (DEXs) or private lending pools could emerge, combining DeFi’s efficiency with traditional finance’s discretion. This evolution supports Ethereum’s vision as a settlement layer for diverse, application-specific networks. Ultimately, successful adoption of Payy’s technology could demonstrate a sustainable model for privacy in a regulated, institutional crypto economy.

Conclusion

Payy’s $6 million seed funding round marks a critical advancement for privacy-enhancing technology in cryptocurrency. By developing a Zero-Knowledge proof-based Ethereum Layer 2 network, Payy directly addresses the confidentiality needs of institutional players, a key requirement for broader digital asset adoption. The capital will fuel team growth and client acquisition ahead of the network’s testnet and mainnet launches this year. As the blockchain industry matures, solutions that bridge the gap between transparent ledgers and private financial operations will become increasingly vital. Payy’s focused approach on privacy stablecoins positions it at the forefront of this essential convergence between institutional finance and decentralized technology.

FAQs

Q1: What is a privacy stablecoin?
A privacy stablecoin is a digital asset pegged to a stable value (like the US dollar) that incorporates cryptographic technology, such as Zero-Knowledge proofs, to conceal transaction details like the sender, receiver, and amount while still operating on a blockchain.

Q2: How does Payy’s technology work?
Payy is building an Ethereum Layer 2 network that uses Zero-Knowledge rollups. This technology batches transactions off-chain and generates a cryptographic proof of their validity, which is then posted to Ethereum. The process verifies transactions without revealing sensitive data on the public ledger.
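The core idea behind Q2's answer, proving a claim is valid without revealing the underlying data, can be illustrated with a classroom zero-knowledge protocol. The sketch below is a toy Schnorr-style proof of knowledge of a secret exponent; it is illustrative only and bears no relation to Payy's actual proof system (production ZK rollups rely on succinct proof systems such as SNARKs or STARKs, not this interactive scheme).

```python
# Toy interactive Schnorr proof: the prover shows it knows a secret
# exponent x satisfying y = g^x mod p, without revealing x.
import secrets

# Small demo parameters: p is a Mersenne prime, g a fixed base.
p = 2**127 - 1
g = 3

x = secrets.randbelow(p - 1)   # prover's secret
y = pow(g, x, p)               # public value derived from the secret

# Commit: prover picks a random nonce r and sends t = g^r mod p.
r = secrets.randbelow(p - 1)
t = pow(g, r, p)

# Challenge: verifier replies with a random challenge c.
c = secrets.randbelow(p - 1)

# Response: prover sends s = r + c*x, reduced mod the group order.
s = (r + c * x) % (p - 1)

# Verify: g^s == t * y^c (mod p). The transcript (t, c, s) convinces
# the verifier the prover knows x, yet reveals nothing else about it.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("statement verified without revealing the secret")
```

In a rollup setting the "statement" is far richer (an entire batch of transactions was executed correctly), but the asymmetry is the same: verification is cheap and public, while the sensitive inputs stay private.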
Q3: Who are the investors in Payy’s seed round?
The $6 million seed round was led by FirstMark Capital, with additional participation from venture firms Robot Ventures and DBA Crypto.

Q4: What is the difference between Payy and privacy coins like Monero?
While both prioritize privacy, they differ in asset type and mechanism. Privacy coins like Monero are native volatile assets with built-in privacy. Payy focuses on stablecoins (price-stable assets) and uses a dedicated Layer 2 network with ZK-proofs, aiming primarily at institutional rather than retail users.

Q5: When will the Payy Network be available?
The Payy Network testnet is scheduled to launch next month for developers and early testers. The mainnet, intended for full institutional use, is targeted for launch in the summer of this year.

This post Privacy Stablecoin Pioneer Payy Secures $6M to Power Confidential Institutional Transactions first appeared on BitcoinWorld.
25 Mar 2026, 15:51
Franklin Templeton and Ondo Finance Forge Ahead with Blockchain Investment Products

Franklin Templeton and Ondo Finance partner to tokenize investment products using blockchain technology. The initiative offers simplified access to stocks and ETFs via digital wallets, bypassing intermediaries.

The post Franklin Templeton and Ondo Finance Forge Ahead with Blockchain Investment Products appeared first on COINTURK NEWS.