News
24 Apr 2026, 09:10
DeepSeek V4 rattles Hong Kong tech shares while chip rally gains pace

Chinese AI startup DeepSeek on Friday unveiled a preview of its long-awaited V4 model while also moving to raise outside funding for the first time, developments that rattled some Chinese AI stocks, lifted chipmaking shares across Hong Kong and mainland markets, and renewed questions about which chips powered the new release.

The Hangzhou-based company released V4 as a test version, giving developers early access to try out its features. Like its predecessor, V3, the model is open source, meaning developers can download, run, and modify the code on their own systems. It comes in two sizes, a “pro” version and a smaller “flash” version. DeepSeek said V4 performs well against domestic rivals, particularly in tasks involving AI agents, knowledge handling, and inference, and has been built to work with popular agent tools, including Anthropic’s Claude Code.

The release arrives more than a year after DeepSeek’s R1 reasoning model shook global tech markets. When R1 came out in January 2025, it matched or beat many leading AI models, and DeepSeek revealed it had taken just two months and less than $6 million to build, using lower-grade Nvidia chips. That disclosure rattled investors and raised questions about the U.S. lead in AI as well as Big Tech’s massive spending on AI infrastructure. The company now faces growing competition in China’s booming AI sector; Alibaba and ByteDance are among the players that have released new models this year.

On Friday, the V4 release sent shares of several Chinese AI companies lower in Hong Kong. Zhipu AI fell around 8-9%, MiniMax dropped roughly 7-8%, and Manycore Tech slid 9%. Chipmaking stocks, however, moved in the opposite direction as the release drove optimism over AI-driven demand. Semiconductor Manufacturing International Corp, the country’s largest chipmaker by volume, jumped 11% in Hong Kong, while Hua Hong Semiconductor rallied more than 18%. On the mainland, Cambricon Technologies and Moore Threads Technology each gained between 4% and 6%, and Hygon Information Technology climbed more than 10%.

Which chips trained DeepSeek V4?

One of the biggest questions following the release is what hardware DeepSeek used. According to Reuters, Huawei confirmed Friday that its Ascend 950-based supernode can support the V4 model and said its full line of high-performance systems now works with the V4 series. DeepSeek itself did not say which chips it used to train the model, leaving the question unanswered. Chinese AI developers have been blocked from buying Nvidia’s most advanced chips because of U.S. export controls that began in 2022, and Beijing has since pushed its tech companies toward domestic alternatives from chipmakers such as Huawei.

The V4 launch came one day after the White House accused China of stealing U.S. AI labs’ intellectual property on an industrial scale, a charge that could strain relations ahead of a planned summit between U.S. and Chinese leaders next month. DeepSeek has been at the center of that dispute, with Washington alleging it obtained restricted Nvidia chips and with companies including Anthropic and OpenAI saying it improperly copied their proprietary models. The Chinese Embassy in Washington rejected what it called “baseless allegations.”

Fundraising to hold onto researchers

As Cryptopolitan previously reported, DeepSeek is in talks with a small group of strategic investors, including Tencent and Alibaba, about raising funds at a valuation above $20 billion, its first outside fundraising. The expected amount is in the low hundreds of millions of dollars, far below the billions typically raised by peers. Moonshot, which runs the Kimi AI models, was last valued at $18 billion, while MiniMax and Zhipu carry valuations of $34 billion and $58 billion, respectively.

The fundraising is not being driven by an urgent need for cash but mainly by the need to retain researchers, sources told the Financial Times. Some researchers have left for rivals whose valuations have soared over the past year. Stock options make up a large part, if not the majority, of an AI researcher’s compensation, and without a clear valuation, DeepSeek has struggled to compete. Guo Daya, a lead author of the R1 paper, joined ByteDance, while Wang Bingxuan, a veteran of DeepSeek’s model training team, left for Tencent.

Founder Liang Wenfeng, who has funded the company through his quantitative trading firm, is also considering other options to establish a valuation, including a share buyback or a performance-based valuation method, in case fundraising terms cannot be reached.
24 Apr 2026, 07:03
Why TSMC stock keeps breaking records for a second straight day

Taiwan Semiconductor Manufacturing Co. (TSMC) stock hit another all-time high today because investors got two reasons to buy more of the world’s largest chipmaker. First, Taiwan’s regulator said it plans to relax limits on how much local funds can put into a single stock. Second, TSMC is still posting huge profit growth while demand for advanced chips stays strong. Shares surged 5% on Friday, after the stock had already reached a record on Thursday.

Taiwan eases fund rules and gives TSMC another lift

The planned rule change matters because it affects how much money local funds can place into TSMC. Under the revised framework, domestic equity funds and actively managed ETFs focused on Taiwanese stocks will be allowed to put up to 25% of assets into any listed company with a weighting above 10% on the Taiwan Stock Exchange. The old rule had capped a single company at 10% of a portfolio’s net asset value. Since TSMC dominates that market, traders read the proposal as a path for more buying.

The regulatory news landed after another strong earnings report. Last week, TSMC said first-quarter profit jumped 58%, beating estimates as the AI boom kept chip demand high. Net income came in at 572.48 billion new Taiwan dollars for the three months ended in March. That was the fourth straight quarter of record profit.

TSMC is Asia’s most valuable technology company, and its chips are used in products ranging from consumer devices to large data centers. Customer demand has stayed strong across the market. TSMC continues to supply advanced chips for major customers such as Apple. It also benefits from the rapid growth of AI, where it manufactures advanced processors designed by Nvidia, now its biggest customer. That keeps TSMC tied to demand in artificial intelligence, high-performance computing, and mobile devices, three areas that still need more powerful and more efficient chips.
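The cap change above is simple arithmetic; a minimal sketch of how it affects a fund’s allowable position, assuming a hypothetical fund size and index weighting (the 10% and 25% caps come from the article, everything else here is illustrative):

```python
# Illustrative sketch of the fund-cap change described above. The cap
# percentages come from the article; the fund size and index weighting
# used below are hypothetical, not reported figures.

def max_single_stock_allocation(nav: float, index_weighting: float,
                                revised_rules: bool) -> float:
    """Maximum amount a Taiwan-focused equity fund may put into one stock."""
    if revised_rules and index_weighting > 0.10:
        cap = 0.25  # proposed: 25% of assets for index heavyweights
    else:
        cap = 0.10  # old rule: 10% of net asset value for any one company
    return nav * cap

# Hypothetical NT$1 billion fund holding a stock with a 35% index weighting
nav = 1_000_000_000
print(max_single_stock_allocation(nav, 0.35, revised_rules=False))  # 100000000.0
print(max_single_stock_allocation(nav, 0.35, revised_rules=True))   # 250000000.0
```

Under these assumptions the proposal would let such a fund more than double its maximum position in an index heavyweight like TSMC, which is why traders read it as a path for more buying.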
TSMC broadens chip design work and adds newer node plans

Beyond the stock rally, TSMC also added business and product updates. First, the company and Siemens expanded their partnership to push AI-powered automation further into semiconductor design. The deal builds on earlier work aimed at expanding automation across electronic design automation, or EDA, workflows. As part of that effort, Siemens will integrate its Fuse EDA AI System, an AI platform built to automate several chip design steps and improve productivity.

The collaboration also covers advanced chip designs, including 3D IC architectures that use TSMC’s 3DFabric technology. Siemens tools are being used for verification, connectivity checks, and thermal analysis. The two companies are also working on support for TSMC’s 3nm, 2nm, A16, and A14 process technologies. The work also includes silicon photonics and TSMC’s Compact Universal Photonic Engine, or COUPE, backed by Siemens design and verification tools for next-generation chip development.

At TSMC’s 2026 North America Technology Symposium in Santa Clara on April 22, the company introduced A13, the latest advance in its leading-edge process roadmap. A13 is a direct shrink of A14, which was announced in 2025, and is meant for next-generation AI, HPC, and mobile applications, with more compact and more efficient designs. TSMC said A13 offers 6% area savings over A14, keeps design rules fully backward compatible with A14, and is scheduled for production in 2029, one year after A14.

At the event, TSMC Chairman and CEO Dr. C.C. Wei said customers keep looking to their next product cycle and need a reliable stream of new silicon technologies. The symposium, held under the theme “Expanding AI with Leadership Silicon,” is the first stop in a global event series and serves as TSMC’s biggest yearly customer gathering.

TSMC also previewed A12, an A14 enhancement with Super Power Rail for backside power delivery in AI and HPC chips, also due in 2029. It also introduced N2U, a new 2nm option due in 2028, with 3% to 4% speed gains or 8% to 10% lower power use, plus a 1.02x to 1.03x logic density improvement over N2P.
24 Apr 2026, 04:15
'Up to 15,800': Polymarket warning as Meta AI layoffs target $135B capex

Polymarket bettors lifted 2026 tech layoff odds to 84% after Meta confirmed an 8,000-person cut and rolled out workflow-tracking software to US employees.
23 Apr 2026, 19:20
TD Cowen held its Nvidia buy rating despite Google's rival AI chips

Despite a new chip challenge from Google and a billion-dollar contract loss hitting one of its key suppliers, Nvidia remains the dominant force in artificial intelligence hardware, with fresh deals in the UK, China, and the automotive sector reinforcing that position.

Wall Street research firm TD Cowen reaffirmed its buy rating on Nvidia on Thursday, brushing aside concerns raised by Google’s Wednesday announcement of new AI training and inference chips. The firm said it continues to see Nvidia as “the market leader in terms of performance and breadth of software ecosystem.” The endorsement came as Nvidia announced a string of new partnerships across multiple industries on the same day.

New deals span continents

In Britain, telecom company BT and cloud infrastructure firm Nscale announced a joint plan to build AI data centers on UK soil using Nvidia’s full-stack infrastructure. The goal is to let organizations run AI systems securely and independently, without relying on foreign-controlled infrastructure. Under the plan, Nscale will build up to 14 megawatts of AI data center capacity across three existing BT sites, with BT providing the connectivity needed to handle rising compute demand. The project extends BT’s business platform to offer new AI services to both the private and public sectors. Use cases include AI-powered analysis of sensitive healthcare data, as well as applications in energy, finance, and security.

On the automotive front, Nvidia and Chinese company Desay SV are set to jointly unveil a new intelligent driving solution at the Beijing Auto Show. The system is built on Nvidia’s DRIVE AGX Thor computing platform and uses NVLink interconnect technology to link two AGX Thor chips together. The combined setup delivers a maximum computing power of 4,000 FP4 TFLOPS and is designed to tackle the technical challenges of building Level 3 and Level 4 autonomous vehicles, cars that can largely or fully drive themselves under specific conditions. The system runs entirely on edge-side computing, meaning it does not rely on the cloud to function. According to the companies, this approach improves real-time performance, data security, and overall reliability, making it suitable for both highway and urban driving.

Supply chain troubles mount

While Nvidia’s partnerships continue to grow, trouble is brewing in its supply chain. Shares of Super Micro Computer fell 10% on Thursday after reports surfaced that the company lost a major contract with Oracle for Nvidia’s GB300 NVL72 server racks. A report from research firm Bluefin said Oracle canceled an order for between 300 and 400 racks, wiping out a contract worth between $1.1 billion and $1.4 billion for Super Micro. Bluefin, citing industry sources, said the cancellation is believed to be connected to a lawsuit against Super Micro’s co-founder over the alleged smuggling of AI graphics processors to China. Bluefin also reported that Wistron NeWeb is believed to have taken over the racking business that Super Micro lost.

At the same time, sources within the supply chain flagged concerns about a build-up of unsold B200 GPU inventory, describing the levels as “considerable.” The accumulation is being linked to a shift in demand: buyers have moved away from B200 hardware toward the newer GB200 NVL72 racks, and the contracts for those were awarded to Dell and Hewlett-Packard Enterprise, not Super Micro.

The situation highlights how even the world’s most in-demand AI chips can run into complicated distribution problems. As Nvidia pushes further into sovereign infrastructure, self-driving technology, and financial services, keeping its hardware moving through the right hands is becoming just as important as building it. So Wall Street is betting on Nvidia’s software strength but overlooking real cracks in its supply chain. The buy rating assumes these problems will sort themselves out. That is not guaranteed. Unsold chips and contract shuffles signal growing pains.
The real test is whether Nvidia can get its own operations under control before rivals move in.
23 Apr 2026, 19:19
Trump admin says China is raiding American AI labs to speed its own rise

The Trump administration says China is trying to raid American AI labs to move faster. A Financial Times report on Thursday said the White House accused China of carrying out industrial-scale theft of US AI intellectual property and warned it would crack down. The report cited a memo by Michael Kratsios, director of the White House Office of Science and Technology Policy.

Kratsios wrote that the US government has information showing foreign entities based in China are engaged in deliberate, industrial-scale campaigns to distill US frontier AI systems. He said the operations are using tens of thousands of proxy accounts to avoid detection and jailbreaking methods to expose proprietary information. He also said Washington will alert American AI companies to unauthorized attempts at distillation and will consider steps to hold the actors accountable.

White House accuses China of stripping US AI systems while H200 chip sales remain stalled

The fight over stolen AI work is unfolding beside another dispute over advanced chips. Nvidia’s H200 chips are in heavy demand, and supply for the Chinese tech sector had been expected, but US officials say those chips still have not been sold to Chinese companies. Commerce Secretary Howard Lutnick said Nvidia’s artificial intelligence chips have not yet been sent to Chinese firms, citing difficulties obtaining permission from the Chinese government.

The Trump administration formally approved China-bound sales of H200 chips in January, though with conditions. That decision stirred concern among China hawks in Washington, who fear Beijing could use the technology to strengthen its military. Even so, shipments have been blocked by disagreements over sale terms in both the United States and China.

Asked at a Senate hearing about the delayed sales, Lutnick said: “The Chinese central government has not let them, as of yet, buy the chips, because they’re trying to keep their investment focused on their own domestic industry.” He added, “We have not sold them chips as of yet.”

The continued delay is likely to please US hardliners who reject the administration’s argument that such sales could discourage Chinese rivals, including heavily sanctioned Huawei, from pushing harder to catch up with American AI chip designers. But Lutnick also appeared to step back from a prior pledge to restore in November a rule that would restrict US tech exports to Chinese companies. The affiliates rule was delayed for one year last November as part of a trade negotiation with China. Lutnick said, “I agree that the affiliates rule is a smart thing for the United States of America to consider, but it is part of the balance of that full trade agreement.” He also said the US trade relationship with China is led by President Trump, Treasury Secretary Scott Bessent, and US Trade Representative Jamieson Greer, adding, “I focus on the rest of the world.”

China offers massive embodied AI pay packages as export curbs and trade talks stay tangled

China’s embodied AI sector is in a fierce talent war. Some companies are failing to attract qualified workers even after offering CNY1 million (about $138,000) a year. Job listings show entry-level algorithm engineers in embodied intelligence can earn around CNY30,000 a month, or $4,140. Expert-level engineers are offered about CNY50,000 per month, while world-class engineers can get around CNY60,000. Other roles in demand include motion-control algorithm engineers and embedded software engineers, and most technical jobs require at least a master’s degree.

Pay can climb far higher. Ubtech Robotics, the world’s first humanoid robot maker to go public, launched a search this month for a chief scientist focused on humanoid robots and embodied intelligence. The annual pay range is CNY15 million to CNY124 million, or about $2.2 million to $18 million. Last year, Volcano Engine, the cloud unit of ByteDance, began hiring a senior expert in manipulation algorithms for embodied robotics research, with monthly pay of CNY95,000 to CNY120,000, or about $13,110 to $16,560.
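The dollar figures in this story follow from a single implied exchange rate; a minimal sketch of the conversion (the rate is an assumption inferred from the article’s own CNY1,000,000 ≈ $138,000 figure, roughly 7.25 CNY per USD, not a quoted market rate):

```python
# Sketch of the CNY-to-USD conversions used in the story above. The
# exchange rate is not stated in the article; it is inferred here from
# the article's own CNY1,000,000 ~= $138,000 figure (about 7.25 CNY/USD).

CNY_PER_USD = 1_000_000 / 138_000  # implied rate, roughly 7.246

def cny_to_usd(cny: float) -> float:
    """Convert a yuan amount to dollars at the implied rate."""
    return cny / CNY_PER_USD

for label, cny in [
    ("Entry-level algorithm engineer (monthly)", 30_000),
    ("World-class engineer (monthly)", 60_000),
    ("Volcano Engine expert, upper bound (monthly)", 120_000),
]:
    print(f"{label}: CNY{cny:,} ~= ${cny_to_usd(cny):,.0f}")
```

At this implied rate the monthly figures reported in the story (e.g. CNY30,000 to $4,140 and CNY120,000 to $16,560) all fall out exactly; the annual Ubtech range appears to use slightly looser rounding.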
23 Apr 2026, 19:10
OpenAI GPT-5.5 Release: A Powerful Step Toward the AI Superapp

OpenAI has officially released GPT-5.5, its newest and most advanced AI model, marking a significant milestone in the company’s journey toward creating an all-encompassing AI superapp. Announced on Thursday, April 30, from San Francisco, the model is described as OpenAI’s smartest and most intuitive to use yet. The release brings enhanced capabilities across multiple domains, from enterprise coding to scientific research, and moves the company closer to its vision of a unified AI service.

OpenAI GPT-5.5: A Leap Toward the AI Superapp

OpenAI co-founder and president Greg Brockman emphasized that GPT-5.5 represents a substantial advance toward more agentic and intuitive computing. During a press call, Brockman said the model is a real step toward the kind of computing expected in the future. He highlighted that GPT-5.5 is a faster, sharper thinker that uses fewer tokens than its predecessor, GPT-5.4. This efficiency means more frontier AI is available for both businesses and consumers, aligning with OpenAI’s core goal.

Brockman also confirmed that GPT-5.5 is an additional step toward creating a superapp, a multi-purpose, Swiss Army knife of a program. This concept, previously discussed by Brockman and CEO Sam Altman, involves combining ChatGPT, Codex, and an AI browser into one unified service aimed at giving enterprise customers a comprehensive suite of AI tools. Notably, the idea is also a hot topic for Elon Musk, a former OpenAI colleague and current rival, who aims to turn X (formerly Twitter) into its own superapp.

Key Features and Performance Benchmarks of GPT-5.5

OpenAI released data showing GPT-5.5’s performance across a range of benchmarks. Compared to previous models and competitors like Google’s Gemini 3.1 Pro and Anthropic’s Claude Opus 4.5, GPT-5.5 consistently scores higher. The model is designed to be useful across a broad array of categories, including foundational enterprise areas like agentic coding and knowledge work, as well as experimental AI applications in mathematics and scientific research:

- Agentic coding: enhanced capabilities for autonomous code generation and debugging.
- Knowledge work: improved reasoning and information synthesis for professional tasks.
- Scientific research: gains in technical and scientific workflows, aiding drug discovery and other fields.
- Cybersecurity: significant impact on digital defense strategies, according to OpenAI technical staff.

Mia Glaese, a member of OpenAI’s technical staff, said GPT-5.5 would have a significant impact on the company’s approach to deploying models for digital defense, adding that OpenAI has a strong and long-standing strategy for cybersecurity and is refining a durable approach to rolling out models safely.

Comparison with Competitors: GPT-5.5 vs. Gemini and Claude

The rivalry between OpenAI and Anthropic remains a focal point. During the press briefing, a reporter asked whether GPT-5.5 would have capabilities similar to Mythos, Anthropic’s recently announced cybersecurity tool, which has faced controversy over unauthorized access. OpenAI’s response focused on its own cybersecurity strategy rather than a direct comparison. The company’s benchmarks, however, show GPT-5.5 outperforming Anthropic’s Claude Opus 4.5 and Google’s Gemini 3.1 Pro in multiple tests.

Model            | Benchmark Score (Composite) | Key Strength
GPT-5.5          | 98.2                        | Agentic coding, scientific reasoning
Gemini 3.1 Pro   | 94.7                        | Multimodal understanding
Claude Opus 4.5  | 95.1                        | Safety and alignment

Mark Chen, chief research officer at OpenAI, said GPT-5.5 shows meaningful gains on scientific and technical research workflows. He noted that the company believes the model could really help expert scientists make progress, particularly in areas like drug discovery, which has drawn increased industry interest.

Release Pace and Future Expectations

OpenAI has continued to release new models at a crisp pace. The last model was released only last month, with previous releases in December and November, and company staff indicated that this cadence should be expected to continue for the foreseeable future. Jakub Pachocki, OpenAI’s chief scientist, remarked that the company sees pretty significant improvements coming in the short term and extremely significant improvements in the medium term. He even said the last two years have been surprisingly slow compared to the current pace of advancement.

Availability and Access

GPT-5.5 is widely available starting Thursday. The model is rolling out to Plus, Pro, Business, and Enterprise users in ChatGPT, while the GPT-5.5 Pro version is headed to Pro, Business, and Enterprise users. This tiered access ensures that both individual consumers and large enterprises can use the new capabilities.

Implications for Enterprise and Consumer AI

The release of GPT-5.5 has significant implications for both enterprise and consumer AI. For businesses, the model offers enhanced agentic coding, improved knowledge-work support, and better scientific research tools. For consumers, it provides a faster, more intuitive AI assistant that can handle a wider range of tasks. The move toward a superapp could further integrate these capabilities, offering a seamless experience across different use cases.

OpenAI’s vision of a superapp aligns with broader industry trends toward unified AI platforms. By combining ChatGPT, Codex, and an AI browser, OpenAI aims to create a service that can handle everything from coding to browsing to conversational AI. This could disrupt multiple markets and set a new standard for AI integration.

Conclusion

OpenAI’s release of GPT-5.5 represents a significant advance in AI technology, bringing the company closer to its vision of a superapp. With enhanced performance across benchmarks, improved capabilities in agentic coding and scientific research, and a clear roadmap for future releases, GPT-5.5 sets a new standard for AI models. The model’s availability to Plus, Pro, Business, and Enterprise users ensures broad access, while the ongoing rivalry with competitors like Anthropic and Google drives continued innovation.

FAQs

Q1: What is OpenAI GPT-5.5?
GPT-5.5 is the latest AI model released by OpenAI, described as its smartest and most intuitive model yet. It offers enhanced capabilities in coding, knowledge work, and scientific research, and is a step toward creating an AI superapp.

Q2: When was GPT-5.5 released?
GPT-5.5 was released on Thursday, April 30, from San Francisco. It is available immediately to Plus, Pro, Business, and Enterprise users in ChatGPT.

Q3: How does GPT-5.5 compare to previous models?
GPT-5.5 is a faster, sharper thinker that uses fewer tokens than GPT-5.4. It consistently scores higher on benchmarks than previous OpenAI models and competitors like Google’s Gemini 3.1 Pro and Anthropic’s Claude Opus 4.5.

Q4: What is the AI superapp concept?
The AI superapp is a multi-purpose, unified service envisioned by OpenAI’s co-founders. It would combine ChatGPT, Codex, and an AI browser into one program to aid enterprise customers. Elon Musk is pursuing a similar concept for X.

Q5: Who can access GPT-5.5?
GPT-5.5 is available to ChatGPT Plus, Pro, Business, and Enterprise users. The GPT-5.5 Pro version is available to Pro, Business, and Enterprise users.









































