News
12 Feb 2026, 13:00
Crypto Enters Thailand’s Capital Markets After Regulatory Approval

Thailand has quietly moved a big step closer to making crypto part of its capital markets. The Cabinet has given the green light for cryptocurrencies to serve as the underlying assets of regulated products such as futures and options. This opens the door to mainstream trading that is tied to real legal rules and cleared through licensed systems.

Regulators Set Rules

According to reports, Thailand’s Securities and Exchange Commission will write the detailed rules next. Those rules will say how exchanges must operate, how trades are cleared, and what risk controls firms must put in place. Exchanges and banks will need licenses, and custody standards will be tightened. Market makers and institutional investors are already talking to local firms about possible listings and clearing setups. Some work will be done by trading venues; other work will be handled by third parties that manage settlement.

Tokenized Bonds And Tax Moves

Earlier projects helped pave the way. The government introduced tokenized government bonds, known as G-Tokens, which were offered through licensed digital trading platforms in 2025. That experiment showed how public debt can sit on a blockchain while still being issued under normal law. At the same time, reports say a temporary tax break was offered to encourage on-shore crypto trading: a five-year capital gains tax exemption running from 2025 to 2029 for trades on approved platforms. Stablecoins such as USDT and USDC were added to the approved list to ease trading and settlement.

Market Reaction And Institutional Interest

According to market watchers, the move drew fast interest from regional fund managers and some global trading desks. There is talk of creating Bitcoin futures and possibly ETFs linked to regulated contracts. Trading firms say the main pull is clearer rules and a legal route for hedging exposure.
Liquidity providers see a chance to offer more tools to investors, and some exchanges have already started building product designs. Volatility remains a concern, and many firms are cautious about running big positions until the clearing rules are final.

Concerns are being raised about custody, fraud, and links to money laundering. Regulators intend to require robust know-your-customer checks and strict audit trails. Leverage levels will likely be limited at first, and margining rules are expected to be strict so that a sudden price move does not cascade through the system. Many observers point out that bringing crypto into regulated markets can help manage these risks, provided the rules are enforced.
12 Feb 2026, 13:00
Pentagon pushes OpenAI and Anthropic for fewer restrictions on classified military AI tools

The Pentagon is putting real pressure on major artificial intelligence companies to give the U.S. military access to their tools inside classified systems. Officials aren’t just asking for basic access. They want these AI models to work without all the usual limits companies place on users. During a White House meeting on Tuesday, Emil Michael, the Pentagon’s Chief Technology Officer, told tech leaders the military wants these AI models running across both classified and unclassified networks. An official close to the talks reportedly said the government is now set on getting what it calls “frontier AI capabilities” into every level of military use.

Pentagon demands access without restrictions across secure networks

This push is part of bigger talks about how AI will be used in future combat. Wars are already being shaped by drone swarms, robots, and nonstop cyberattacks. The Pentagon doesn’t want to play catch-up while the tech world draws lines around what’s allowed. Right now, most companies working with the military offer watered-down versions of their models, which only run on the open, unclassified systems used for admin work. Anthropic is the one exception: Claude, its chatbot, can be used in some classified settings, but only through third-party platforms, and even then government users still have to follow Anthropic’s rules. What the Pentagon wants is direct access inside highly sensitive classified networks, the systems used for things like planning missions or locking in targets. It’s not clear when or how chatbots like Claude or ChatGPT would be installed on those networks, but that’s the goal. Officials believe AI can help process huge amounts of data and feed it to decision-makers fast. But if those tools generate false info, and they do, people could die. Researchers have warned about exactly that. OpenAI made a deal with the Pentagon this week: ChatGPT will now be used on an unclassified network called genai.mil.
That network already reaches over 3 million employees across the Defense Department. As part of the deal, OpenAI removed a lot of its normal usage limits. There are still some guardrails in place, but the Pentagon got most of what it wanted. A company spokesperson said any expansion to classified use would need a new deal. Google and Elon Musk’s xAI have done similar deals in the past.

AI researchers are quitting and calling out the risks

Talks with Anthropic haven’t been as easy. Leaders at the company told the Pentagon they don’t want their tech used for automatic targeting or spying on people inside the U.S. Even though Claude is already being used in some national security missions, the company’s executives are pushing back. In a statement, a spokesperson said: “Anthropic is committed to protecting America’s lead in AI and helping the U.S. government counter foreign threats by giving our warfighters access to the most advanced AI capabilities.” They said Claude is already in use, and the company is still working closely with what’s now called the Department of War. President Donald Trump recently ordered the Defense Department to adopt that name, but Congress still needs to approve it. While all of this is happening, a number of researchers at these companies are walking out. One of Anthropic’s top safeguards researchers said, “The world is in peril,” as he quit. A researcher at OpenAI also left, saying the tech has “a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.” Some of the people leaving aren’t doing it quietly. They’re warning that things are moving too fast and the risks are being ignored. Zoë Hitzig, who worked at OpenAI for two years, quit this week. In an essay, she said she had “deep reservations” about how the company is planning to bring in ads.
She also said ChatGPT stores people’s private data, things like “medical fears, their relationship problems, their beliefs about God and the afterlife.” She said that’s a huge problem because people trust the chatbot and don’t think it has any hidden motives. Around the same time, tech site Platformer reported that OpenAI got rid of its mission alignment team. That group was set up in 2024 to make sure the company’s goal of building AI that helps all of humanity actually meant something.
12 Feb 2026, 12:49
US officials frame tech push as counterweight to Beijing’s regional influence

The US is pressing AI funding, fisheries technology, and maritime surveillance at APEC meetings in southern China, positioning American systems as the alternative for partners amid the US-China rivalry shaping the region’s technology and security agenda. The push comes as Washington promotes exports of artificial intelligence tools and ocean-monitoring technologies to Asia-Pacific economies.

US advances AI funding through APEC

Casey Mace, the US’ senior representative to APEC (Asia-Pacific Economic Cooperation), has announced that the US will establish a $20 million fund to help APEC partner nations adopt American AI technologies. The initiative fits into a larger strategy of demonstrating US leadership in new technologies ahead of key diplomatic events later this year, such as China’s hosting of APEC leaders in Shenzhen. The American approach was reinforced over the past year when President Donald Trump signed an executive order to promote “American AI technology, create responsible standards for AI, and develop governance models for internationally adopting” American artificial intelligence technologies. The US government argues that its approach is based on transparent standards and supports innovation driven by market forces. Maritime AI efforts date back to 2023, when the governments of Australia, the United Kingdom, and the United States joined forces to deploy advanced AI technology aimed at bolstering maritime security in the Asia-Pacific region, a significant step forward in AI-powered maritime surveillance systems.

Challenging China’s AI model

US representatives have used the discussions to highlight how their views differ from China’s.
According to a spokesperson from the US Department of State, China promotes the ideas of the Chinese Communist Party (CCP) and uses AI as a tool for censorship, alongside an oppressive approach to AI governance. “China’s AI technology promotes CCP propaganda and censorship, while its vision for AI governance seeks to enable authoritarian repression,” the spokesperson said. China denies these claims, saying it supports global cooperation on AI governance and the responsible, effective use of AI. Meanwhile, China continues to spend heavily to narrow its technological gap with the United States, even as restrictions keep it from closing that gap in some fields, such as the manufacture of advanced chips. The initiative also targets illegal fishing with technology. China’s fishing fleet is the largest in the Pacific and creates challenges for smaller coastal nations trying to enforce fisheries regulations. Ruth Perry, Acting Principal Deputy Assistant Secretary of State for Oceans and International Environmental and Scientific Affairs, said “numerous countries are adversely affected, and China’s distant water fleet is the common denominator and cannot be ignored in the Pacific”. US companies are said to be developing technologies to combat these issues through satellite tracking of fishing vessels, AI-based analytical tools, acoustic detection systems, and sensor-equipped ocean buoys. Perry stated that “illegal fishing practices are often associated with human trafficking, forced labour, and smuggling,” referencing concerns about China’s new fishery laws being proposed in May 2026. “China seems to be saying all the right things, and we will be looking for them to follow through with actions,” said Perry.
12 Feb 2026, 12:40
SoftBank misses ¥336.7 billion estimate despite ¥248.6 billion profit quarter

SoftBank reported a quarterly profit of ¥248.6 billion, falling short of the expected ¥336.7 billion. Even though the number is still big, the miss raised eyebrows. The group saw gains in some places but got dragged down in others. It wasn’t a clean win, and the performance looked uneven despite what the headline profit might suggest. For the nine-month period from April to December, net sales hit ¥5.72 trillion, up 7.9% from the ¥5.3 trillion posted a year ago. Income before tax jumped 228% to ¥4.17 trillion, while net income soared nearly 400%, landing at ¥3.17 trillion versus ¥636.2 billion the year before. Gains on investments doubled from ¥2.17 trillion to ¥4.22 trillion, but the group’s investment business outside of the Vision Funds collapsed, dropping 91.9% to just ¥163.4 billion. That segment alone used to bring in over ¥2 trillion.

Vision Funds return after last year’s collapse

The SoftBank Vision Funds unit staged a massive turnaround, logging a ¥3.6 trillion gain, a major bounce from the ¥309.9 billion loss reported a year earlier. The Vision Funds are where most of the OpenAI exposure sits; SoftBank has been piling billions into them throughout 2025, aiming to ride the AI wave. The group officially committed $40 billion to OpenAI in March 2025, though $30 billion of that was SoftBank’s own exposure. The money was funneled through SVF2, its second Vision Fund. In April 2025, the first $10 billion round closed, with $7.5 billion of it from SVF2. Then, by December, a second $31 billion round wrapped up, with SVF2 throwing in another $22.5 billion. Altogether, SoftBank’s total stake hit $34.6 billion, giving it about 11% ownership in OpenAI. The first chunk went into OpenAI Global LLC, while the second landed in OpenAI Group PBC, following a recapitalization completed in October. The pre-money valuation was $260 billion, and co-investors chipped in another $11 billion, bringing the total syndication to $41 billion.
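The round-level figures reported above are internally consistent; a quick arithmetic check (variable names are ours, all values in billions of USD):

```python
# Reconcile the reported OpenAI round figures (billions of USD).
round_1, round_2 = 10.0, 31.0          # April 2025 and December rounds
svf2_round_1, svf2_round_2 = 7.5, 22.5 # SVF2's share of each round

total_syndication = round_1 + round_2
softbank_exposure = svf2_round_1 + svf2_round_2
co_investors = total_syndication - softbank_exposure

assert total_syndication == 41.0  # matches the reported $41B syndication
assert softbank_exposure == 30.0  # matches SoftBank's own $30B exposure
assert co_investors == 11.0       # matches the $11B chipped in by co-investors
```

The $34.6 billion total stake exceeds the $30 billion SVF2 put into these two rounds, consistent with SoftBank holding additional exposure beyond them.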
SoftBank now holds its OpenAI shares directly through SVF2.

Asset sales and new bets show where the money is going

Behind the scenes, SoftBank’s been selling off other holdings to keep the OpenAI checks flowing. Between June and December, it dumped $12.73 billion worth of T-Mobile shares. It also sold its full Nvidia stake in October for $5.83 billion, despite Nvidia’s central role in AI chips. The company has also been borrowing against its Arm stake and other holdings to stay liquid. It hasn’t stopped spending, though. In December 2025, SoftBank said it would buy DigitalBridge, a data center investment firm based in Florida, for $4 billion, including debt. A couple of months earlier, in October, it agreed to acquire ABB’s robotics division for $5.4 billion. Both moves were aimed at building more exposure to AI-linked infrastructure. SoftBank’s CFO Yoshimitsu Goto said this week that 60% of the company’s assets are now AI- or ASI-related, referencing artificial superintelligence, which founder Masayoshi Son once claimed would be “10,000 times smarter than humans.” The focus on ASI is no longer just talk; it’s clearly where the company is betting everything. Goto was pressed multiple times on OpenAI during the company’s earnings call. When asked why SoftBank keeps doubling down on the AI firm even after some rough patches, he replied: “We assume OpenAI will be able to lead this industry and this era, and we are quite convinced. So that’s why we are making an investment in this company.” The company believes OpenAI is just getting started with monetizing its tech. A person close to the matter said future revenue may come from enterprise deals, hardware, and ads, even though the company isn’t profitable yet. SoftBank shares are up 9.5% in 2026 so far, after almost doubling in 2025. Investors got another reason to buy this week after Prime Minister Takaichi Sanae’s win over the weekend. Takaichi’s push for bigger spending on AI and semiconductors gave markets a jolt.
Still, the big question now is whether SoftBank can keep financing its AI push without sinking its balance sheet.
12 Feb 2026, 12:30
XRP Community Day Recap: The 7 Most Bullish Takeaways

Ripple used XRP Community Day to tighten its message: XRP is not an accessory to the business, it’s the organizing principle, and the company is positioning its product stack, regulatory posture, and institutional roadmap around that premise.

XRP Community Day Highlights

CEO Brad Garlinghouse went straight for the ceiling. “There will be a trillion dollar crypto company, I don’t doubt that for a second,” he said. “I think Ripple has the opportunity to be that company, and maybe there’ll be more than one.” The framing matters because it’s not a token price call; it’s a scale argument about where regulated rails, liquidity, and enterprise distribution could concentrate as XRP plugs further into legacy finance. Policy was the second major pillar. Garlinghouse put odds on the table for US legislation, predicting a “75%” chance the CLARITY Act will be “very close to getting signed by the end of April.” Garlinghouse also tried to reconcile market volatility with institutional appetite, pointing to ETF flow behavior during a rough tape. “I believe in a multi-chain world. Even last week, when there was massive carnage going on in the market, there was positive XRP ETF inflows of $30M or $40M,” he said. “Public markets are keen to invest in crypto. Customers want it.” Garlinghouse framed the compliance posture less as defensive and more as a competitive moat. “We want to be the most regulated, compliant, because we’re focused on institutional flows—that is the priority,” he said. “The OCC charter makes it very clear that RLUSD is a leader under the GENIUS Act, it cements our leading position.” In Ripple’s telling, regulatory credentials aren’t a cost center; they’re how you win mandates, counterparties, and distribution in the parts of the market that actually move size. He also hinted at major progress on the Fed Master Account.
“Now, there’s been a lot of speculation about what we could do in the future,” Garlinghouse said. “There’s been some commentary about a Fed Master Account, which we do think is compelling. And there’s things we may do in the future that I’m not gonna go into today.” He then anchored the point in trajectory rather than rumor: conditional OCC approval and engagement, he said, represent “massive progress relative to where we started this journey.” On XRP itself, Garlinghouse delivered the cleanest thesis statement of the event: “XRP is the north star for Ripple. It’s our purpose.” He tied Ripple Payments, Ripple Prime, Ripple Treasury, custody, and RLUSD to a single objective: “how we can drive utility, trust, liquidity around XRPL.” President Monica Long expanded that into an execution roadmap: “We’re rewinding the tape back to the founding of the company, like XRP and the Ledger are our reason for being,” she said. “So we call it our North Star, like that this is kind of what guides us in a lot of our product strategy and decision-making.” From there, she outlined three institutional-flavored pushes: bringing more licensed payments flow onto the XRPL DEX; a “payments credit” concept that matches payment-provider financing needs with XRP holders seeking yield via a proposed lending protocol amendment; and growing custody demand as banks move past safekeeping into tokenization of deposits, funds, and traditional securities. At press time, XRP traded at $1.38.
12 Feb 2026, 12:10
Ethereum developers propose system to use AI chatbots privately

Ethereum developers have proposed a new way to protect people’s privacy when using AI chatbots, allowing users to make API calls without linking their requests to their real identities, while still paying providers and punishing abusers. Ethereum co-founder Vitalik Buterin and Ethereum Foundation AI lead Davide Crapis shared a blog post explaining how users can interact with large language models privately while zero-knowledge proofs prevent spam and cheating.

Ethereum developers build private way to pay for AI chatbots

Buterin and Crapis say AI chatbots raise serious privacy concerns today because users share personal and sensitive information via API calls that providers can record, track, and sometimes connect back to the owner. The developers say the issue can’t be ignored any longer, because the risk of personal data exposure keeps growing as people use AI every day. As things stand, Buterin and Crapis explain, AI providers can either ask users to sign in with an email address or pay with a credit card, or use blockchain payments for anonymity. If companies settle on email addresses and credit card payments because they’re more familiar, users’ privacy will be at risk, as every chatbot request links to someone’s real identity. This can lead to profiling, tracking, and even legal risks if these logs are presented in court. With blockchain payments, users would have to pay on-chain for every request, but the process is slow and costly, and it creates a visible record of every message; privacy is again impossible when paying per request, because the user’s transaction history is easy to track. Buterin and Crapis are now proposing a model in which a user deposits funds into a smart contract once and then makes thousands of private API calls. This way, the provider is sure the requests have been paid for, and the user doesn’t have to confirm their identity every time they interact with the chatbot.
Buterin and Crapis say the new model would go a long way toward keeping people safe while allowing the technology to grow.

Zero-knowledge proofs stop bad behavior without revealing user identity

The developers say the system will use zero-knowledge cryptography to prevent cheating and abuse, because it allows a user to prove something is true without revealing their identity. Zero-knowledge tools will help honest users remain anonymous while exposing bad actors who try to break the rules. The model uses a tool called Rate-Limit Nullifiers (RLN), which allows users to make anonymous requests while catching anyone who tries to cheat the protocol. The process begins when an account owner generates a secret key and adds funds to a smart contract, which then serves as a buffer for API calls. The owner funds the account once and then makes private calls against the deposited funds, rather than sending a separate transaction for each API call. This also acts as a built-in rate limit: an individual can make only as many calls as they have deposited money for. Every time the user makes a request, the protocol assigns it a ticket index, and the user must produce a special proof, called a ZK-STARK, showing that they are still spending funds deposited with the protocol, net of any refunds they are entitled to. The system also processes refunds, since AI requests are not all of equal cost. The protocol generates a unique nullifier for each ticket to mark it as used, and it immediately detects attempts to reuse the same ticket index for two different requests. According to Buterin and Crapis, abuse is not only double-spending: some users may try to break the provider’s rules by sending harmful prompts, jailbreaks, or requests for illegal content such as weapon instructions.
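The double-spend detection behind RLN is usually built on a Shamir-style trick: each ticket index defines a secret line, one request reveals one point on that line, and reusing a ticket reveals a second point, which lets anyone reconstruct the user’s secret and slash the deposit. A toy sketch of that idea (the hash and field choices here are illustrative placeholders, not the real protocol):

```python
# Toy sketch of RLN-style slashing: each ticket index fixes a line
# y = a0 + a1*x (mod P), where a0 is the user's secret and a1 is derived
# from (a0, ticket_index). One request reveals one point on the line; two
# requests with the SAME ticket index reveal two points, letting anyone
# recover a0 (and slash the on-chain deposit).
import hashlib

P = 2**255 - 19  # illustrative prime field


def h(*parts) -> int:
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P


def make_share(secret: int, ticket_index: int, message: str):
    a1 = h(secret, ticket_index)  # line slope, fixed per ticket
    x = h(message)                # evaluation point from the request
    y = (secret + a1 * x) % P     # the revealed share
    nullifier = h(a1)             # same ticket index -> same nullifier
    return nullifier, x, y


def recover_secret(p1, p2) -> int:
    # Two points (x1, y1), (x2, y2) on y = a0 + a1*x: solve for a0.
    (x1, y1), (x2, y2) = p1, p2
    a1 = (y1 - y2) * pow(x1 - x2, -1, P) % P
    return (y1 - a1 * x1) % P


secret = h("user-secret-key")
n1, x1, y1 = make_share(secret, ticket_index=7, message="first request")
n2, x2, y2 = make_share(secret, ticket_index=7, message="second request")
assert n1 == n2                                      # reuse is detected
assert recover_secret((x1, y1), (x2, y2)) == secret  # deposit can be slashed
```

Using the ticket only once reveals a single point, which leaks nothing about the secret; that is what lets honest users stay anonymous while cheaters expose themselves.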
The protocol therefore adds another layer called dual staking, in which one part of the user’s deposit is subject to strict mathematical rules and the other is subject to provider policy enforcement.









































