News
23 Jan 2026, 18:00
Lenovo plans multi-model AI strategy with Mistral, Alibaba, and DeepSeek

Lenovo is aiming to partner with Humain, Mistral AI, Alibaba, and DeepSeek rather than build its own large language models (LLMs) as part of its plan to repeat its 2025 success this year. By partnering with leading AI companies, Lenovo avoids navigating complex global regulations on its own while still providing each region with the AI solutions it needs. Lenovo ended 2025 as a leader in the PC industry, shipping 71 million units. However, rising memory and storage prices could pose a challenge in 2026.

Lenovo aims to become an AI leader through multi-model partnerships

At the 2026 World Economic Forum in Davos, Lenovo Group’s Chief Financial Officer, Winston Cheng, detailed the company’s plan to become a leader in the global artificial intelligence (AI) market. Instead of creating its own AI models, Lenovo is striking deals with the world’s top developers to power its next generation of devices. By partnering with multiple firms, Lenovo aims to navigate complex global regulations and provide regional AI solutions that work within its massive ecosystem of Windows and Android devices, unlike competitors such as Apple, which currently limits its AI integrations to OpenAI and Google’s Gemini.

Lenovo calls its strategy the “orchestrator approach.” According to Cheng, the company does not want to compete with model developers; instead, it wants to be the platform where these models run. Different countries have different rules for AI data and security. For example, AI models used in China must follow different standards than those used in Europe or the Middle East. To meet specific local needs, Lenovo is lining up partners in every major market: in Europe, it is looking toward Mistral AI; in China, it is working with Alibaba and DeepSeek; and in the Middle East, it is eyeing a partnership with Humain, a Saudi-based AI initiative.
Cheng also noted that Lenovo is the only company besides Apple that holds a significant market share in both the PC and mobile phone markets.

How will these AI partnerships change the way people use their devices?

Lenovo’s new built-in cross-device intelligence system, “Qira,” was unveiled earlier this month at CES 2026. Qira is described as “Personal Ambient Intelligence” that stays active across laptops, tablets, and smartphones. It can summarize meetings, help draft documents, and even predict a user’s “next move” by looking at their calendar and files. By integrating models from partners like Alibaba and Mistral AI directly into the Qira system, Lenovo can offer high-speed AI performance without forcing users to open separate apps.

Lenovo and Nvidia recently introduced the “AI Cloud Gigafactory,” which uses Lenovo’s Neptune liquid-cooling technology and Nvidia’s advanced chips, including the new Vera Rubin NVL72 architecture, to build massive data centers. These “gigafactories” are designed to help AI cloud providers set up operations in weeks rather than months. Cheng said that Lenovo and Nvidia are focusing on the global deployment of these systems and plan to expand in Asia and the Middle East.

Global PC shipments grew by over 9% in 2025, and Lenovo closed the year as the market leader with 71 million units shipped. However, memory and storage prices increased by as much as 40% to 70% throughout 2025. Due to the rising costs, Lenovo plans to raise consumer prices to protect its profit margins.
23 Jan 2026, 17:45
Tesla removes Autopilot from new Model 3 and Model Y vehicles in US and Canada

Tesla on Friday discontinued its driver-assistance system, Autopilot, on all new Model Y and Model 3 vehicles in the U.S. and Canada. The firm stated that the move aims to boost adoption of the more advanced version of its Full Self-Driving (Supervised) technology. The decision follows a 30-day suspension of its manufacturing and dealer licenses in California last month. In December, the California DMV accused the company of engaging in deceptive marketing by overstating the capabilities of its driver-assistance system and FSD for years. A judge gave Tesla 60 days to comply by discontinuing the Autopilot name.

Tesla’s new cars come standard with Traffic Aware Cruise Control

“NEWS: Tesla has officially discontinued Autopilot in the U.S. and Canada. All new car purchases now come standard with Traffic-Aware Cruise Control. The online configurator has now been updated to allow buyers to choose the $99/month FSD subscription, while still offering the…” – Sawyer Merritt (@SawyerMerritt), January 23, 2026

The driver-assistance system included Traffic Aware Cruise Control and Autosteer; however, the company’s online configurator shows that new cars now come standard only with Traffic Aware Cruise Control. The firm stated that the feature keeps cars at a designated speed while maintaining a safe distance from vehicles ahead. Tesla has not yet confirmed whether current customers will be affected by the change.

Cryptopolitan reported last week that Tesla also revealed plans to stop charging a one-time $8,000 fee for the FSD software starting February 14. The change will let customers access FSD through a $99 monthly subscription. Elon Musk said on Thursday that the subscription price will increase as the software’s capabilities improve. Musk is confident that new cars will enable unsupervised driving, despite texting while driving being illegal in almost all U.S. states.
“I should also mention that the $99/month for supervised FSD will rise as FSD’s capabilities improve. The massive jump is when you can be on your phone or sleeping for the entire ride (unsupervised FSD).” – Elon Musk, CEO of Tesla.

Tesla introduced the first robotaxi versions of its Model Y SUVs on Thursday in Austin, Texas. Musk revealed that the new cars will have no human safety-monitoring personnel. He also called for those interested in solving real-world AI to join Tesla AI, arguing that solving real-world AI for Optimus will be 100 times harder than for cars. The electric-car manufacturer launched its robotaxis in Texas last June with a safety operator in the front passenger seat. Tesla’s AI lead, Ashok Elluswamy, also revealed that not all cars will be fully driverless. Tesla confirmed that the new cars run a more advanced version of its driving software and still operate under the firm’s oversight.

Tesla aims to hit 10 million active FSD subscribers by 2035

Musk has consistently maintained that adoption of Full Self-Driving software has lagged his expectations since the beta version launched in late 2020. The firm’s Chief Financial Officer, Vaibhav Taneja, revealed in October that only 12% of all customers had paid for the software. He also stated that Tesla’s product goal is to hit 10 million active FSD subscriptions by 2035, an initiative that aligns with the milestones Musk must reach to receive the full payout of his new $1 trillion pay package.

Tesla first offered the FSD option in the mid-2010s and made Autopilot standard on all of its vehicles in April 2019. Tesla has struggled to communicate Autopilot’s capabilities accurately since introducing the software more than a decade ago. The lack of clear communication and overpromised statements led some drivers to become overly confident in the system’s abilities, resulting in hundreds of crashes.
The National Highway Traffic Safety Administration (NHTSA) reported that the company’s EVs have been involved in hundreds of crashes and at least 13 fatalities. The agency argued that Tesla’s weak driver-engagement system was not appropriate for Autopilot’s permissive operating capabilities.
23 Jan 2026, 17:30
Stablecoins May Soon Power Payments Made Entirely By AI—CEO

Circle’s chief executive painted a brisk picture at Davos this week: autonomous software agents that act for people could be using stablecoins to pay for everyday things within three to five years. He said these agents will need a money system that is stable, fast, and programmable. That, he argued, points to stablecoins as the likely choice.

AI Agents And Money

According to reports, Jeremy Allaire of Circle said “literally billions” of AI agents may be transacting on behalf of users in the near term. “Three years, five years from now, one can expect that there will be billions, literally billions of AI agents conducting economic activity in the world on a continuous basis,” Allaire said during the World Economic Forum in Davos, Switzerland. He described work on new networks and tools aimed at letting software act like small businesses or helpers that buy services, settle bills, and tip content creators. The idea is simple on the surface: software needs a reliable unit of account when it spends, and tokenized dollars can fit that role.

Building The Tools

Reports say companies across the crypto and tech world are racing to build the plumbing for this future. Circle is pitching USDC as a neutral payments layer that software can plug into. Other firms are testing protocols that let a machine sign off on a payment when certain conditions are met. Some large tech groups are also exploring ways for their platforms to let software pay for services automatically. Progress is visible, but the path is not yet clear.

What Regulators Might Ask

Regulators will have questions. Reports note concerns about money flow, consumer protections, and where bank deposits sit if stablecoins grow rapidly. At Davos, the CEO pushed back on the idea that stablecoins would drain bank deposits the way some fear, saying comparisons to other financial instruments are more fitting.
Still, lawmakers in the US and elsewhere are watching closely. Rules could move faster if policymakers see real volume coming from so-called agentic commerce.

New Networks, New Risks

Based on reports, the technical choices will shape both convenience and danger. If agents can move value at scale, fraud and theft risks may rise too. Systems will need clear identity checks, fault handling, and ways to stop runaway payments. Some safety work is already under way, but much remains to be designed and tested.
23 Jan 2026, 17:30
Meta AI Teen Safety: Critical Pause on Character Access Precedes Tailored Youth Version

In a decisive move reflecting growing industry-wide pressure, Meta has announced a global pause on teen access to its AI characters across all of its apps, opting to develop a specially tailored version designed explicitly for younger users. This strategic shift, confirmed to Bitcoin World, arrives amid escalating legal challenges and intensifying regulatory scrutiny concerning teen safety and mental health on digital platforms. The company frames this not as an abandonment of its AI ambitions, but as a necessary recalibration to prioritize age-appropriate interactions and robust parental oversight.

Meta AI Teen Safety: The Global Pause Explained

Meta’s announcement marks a significant policy reversal. Starting in the coming weeks, the company will restrict all teen access to its suite of AI characters. The restriction applies not only to users who have provided a teen birthday but also to accounts Meta’s age-prediction technology flags as potentially belonging to minors. The decision follows direct feedback from parents seeking greater insight into, and control over, their children’s interactions with generative AI. Consequently, Meta is implementing a “hardened approach” by completely disabling the feature for teens until the redesigned experience is ready for deployment. The pause supersedes previously previewed controls that would have allowed parents to monitor and block specific characters.

Regulatory Pressure and Legal Backdrop

Meta’s timing is conspicuously aligned with mounting legal pressure. The announcement precedes a critical trial in New Mexico, where the company faces accusations of failing to protect children from sexual exploitation on its platforms. Furthermore, Wired reported that Meta has sought to limit legal discovery related to social media’s impact on teen mental health.
Separately, Meta confronts another trial next week alleging its platforms cause social media addiction, with CEO Mark Zuckerberg expected to testify. These concurrent legal battles underscore the heightened regulatory environment compelling tech giants to proactively demonstrate a duty of care, especially toward vulnerable user groups like teenagers.

A Broader Industry Trend Toward Youth Safeguards

Meta’s action is not an isolated incident but part of a broader corrective trend within the AI industry. Following lawsuits alleging AI tools aided self-harm, other companies have instituted similar protective measures. In October, for instance, Character.AI banned open-ended chatbot conversations for users under 18. OpenAI has also introduced new teen-safety rules for ChatGPT and deployed age-prediction technology to apply content filters. This collective shift indicates an emerging industry standard that prioritizes guarded, structured AI interactions for minors over unfettered access.

Blueprint for the Future: The Tailored Teen AI Experience

Meta has outlined core principles for its forthcoming teen-specific AI characters. The new system will feature built-in parental controls from the outset, granting guardians definitive authority. More fundamentally, the AI characters themselves will be engineered to deliver age-appropriate responses, with conversational domains intentionally limited to constructive topics such as education, sports, and hobbies. This design philosophy echoes the PG-13 movie rating that inspired parental-control features Meta rolled out in October, which restricted teen exposure to content involving extreme violence, nudity, or graphic drug use. The goal is a sandboxed AI environment that fosters positive engagement while mitigating potential risks.
Timeline of Meta’s Recent Teen Safety & AI Actions

- October 2025: Rolled out new parental controls (inspired by the PG-13 rating; restricted access to mature content).
- October 2025: Previewed controls for AI characters (allowed parents to monitor/block specific AI characters).
- January 2026: Announced global pause on teen AI access (disabled the feature entirely pending the new tailored version).
- Future (TBD): Launch of teen-specific AI characters (age-appropriate responses, built-in parental controls, limited topics).

The Critical Role of Parental Controls and Age Verification

The efficacy of Meta’s new strategy hinges on two technical pillars: sophisticated age verification and granular parental controls. The company will rely on a combination of user-provided birthdates and its proprietary age-prediction technology to enforce access restrictions, a dual approach intended to thwart attempts by minors to bypass safeguards. For parents, the promised controls are designed to be comprehensive, potentially including the ability to:

- Completely disable AI character chats.
- Review interaction histories or receive activity summaries.
- Approve or block specific AI characters or conversation topics.

These tools represent a significant evolution from earlier, more passive monitoring options, shifting toward proactive parental management.

Expert Angle: Balancing Innovation with Protection

Industry analysts view this pause as a necessary, albeit reactive, step in the responsible development of consumer AI. The rapid deployment of generative AI features has often outpaced the establishment of corresponding ethical guardrails, particularly for youth. Meta’s decision to halt and redesign reflects a growing acknowledgment within the tech sector that AI interactions for minors require a fundamentally different framework, one that prioritizes safety and developmental appropriateness over engagement metrics.
The success of this initiative will depend on transparent collaboration with child-safety experts, educators, and parents during the development phase.

Conclusion

Meta’s global pause on teen access to AI characters signifies a pivotal moment in the maturation of consumer artificial intelligence. Driven by legal challenges, regulatory scrutiny, and genuine user feedback, the company is prioritizing the development of a safeguarded, teen-specific AI experience. This move aligns with a wider industry trend toward stricter youth protections for generative AI tools. The forthcoming tailored version, with its emphasis on built-in parental controls and age-appropriate content, will serve as a critical test case for balancing innovative AI engagement with the paramount responsibility of protecting younger users online. The outcome will likely influence safety standards across the entire social media and AI landscape.

FAQs

Q1: Why did Meta pause teen access to AI characters?
Meta paused access globally in response to parents’ requests for more control and insight. The company is using this time to develop a specially tailored AI experience for teens with built-in safety features and parental controls, rather than offering the existing, less restricted version.

Q2: When will teens be able to use AI characters on Meta apps again?
Access will remain paused until Meta completes development and launches the new teen-specific AI character experience. The company has not provided a specific public release date for the updated system.

Q3: What will be different about the new teen AI characters?
The new characters will be designed to give age-appropriate responses and will be restricted to topics like education, sports, and hobbies. They will launch with integrated parental controls, allowing guardians to monitor and manage their teen’s interactions from the start.

Q4: How does Meta know if a user is a teen?
Meta uses a combination of the birthday information a user provides and its own age-prediction technology. The pause applies to all accounts identified as teens through either method.

Q5: Are other AI companies making similar changes for teen safety?
Yes, this is part of an industry-wide trend. Companies like Character.AI and OpenAI have also recently introduced new restrictions, safety rules, and age-verification measures for their AI tools when used by minors.
23 Jan 2026, 17:30
WTO signals possible upside to global trade as AI investment accelerates

The growing trade in artificial intelligence equipment might lift worldwide commerce beyond current estimates this year, according to the head of the World Trade Organization, even as concerns about American tariffs loom over the global economy. Ngozi Okonjo-Iweala, who leads the WTO, told Bloomberg Television on Friday that AI-related investment accounts for 42% of the increase in goods trade expected for 2025. This includes computer hardware, software, and the infrastructure needed for data centers.

Trade projections may be revised upward

The Geneva-based organization predicted in October that global merchandise trade would grow by only 0.5% this year, a modest figure that takes into account the impact of import taxes imposed by US President Donald Trump. But Okonjo-Iweala now sees room for improvement. “However, we see a real potential upside,” she said during the interview. “If this kind of pace of trade in AI goods continues, then we will potentially see larger numbers than what we have projected.”

The WTO director-general said her organization plans to review its projections soon. She pointed to the recent trade agreement between the United States and China, along with ongoing talks between the European Union and China, as critical factors for keeping international trade healthy. Despite trade tensions, Okonjo-Iweala said the United States remains involved at the WTO and is putting forward ideas for changing how the institution operates. Speaking on the final day of the World Economic Forum in Davos, Switzerland, she described the week’s mood as shifting from worry to cautious optimism. “The atmosphere went from a great deal of apprehension to one of a little more hope,” she said.

A research paper released at the Davos meeting argues that countries should rethink how they approach AI infrastructure spending.
The document, written jointly by the World Economic Forum and the consulting firm Bain & Co, says no single nation can realistically build all components of the AI technology stack on its own. The authors advise treating AI development as “strategic interdependence” rather than pursuing total self-sufficiency: nations should form alliances with reliable partners while making targeted investments domestically.

The research shows that the United States and China dominate the AI landscape, capturing roughly 65% of global investment across the entire AI value chain, covering everything from semiconductor chips and cloud computing to software applications. For smaller and medium-sized countries, this concentration of resources creates competitive challenges. AI infrastructure, particularly data centers and computing power, is now seen as essential for national AI capabilities. The paper suggests that countries moving quickly can still find success by concentrating on specific areas, joining forces with neighboring nations, or securing access through partnerships instead of trying to match the American and Chinese models.

Jobs will be enhanced or eliminated

While AI equipment trade provides economic benefits, the technology’s effect on workers raises difficult questions. Kristalina Georgieva, speaking to attendees in Davos, shared findings from International Monetary Fund research on how AI will reshape job markets. “We expect over the next years, in advanced economies, 60% of jobs to be affected by AI, either enhanced, eliminated, or transformed – 40% globally,” Georgieva said. “This is like a tsunami hitting the labour market.” In developed nations, one out of every 10 jobs has already been improved by AI, according to the IMF chief. Workers in these enhanced positions tend to earn more money, which benefits their local communities. However, Georgieva warned that AI threatens positions typically filled by young people entering the workforce.
Entry-level jobs often involve tasks that AI can now handle, making it harder for younger workers to find good positions. “Tasks that are eliminated are usually what entry-level jobs do at present, so young people searching for jobs find it harder to get to a good placement,” she explained.
23 Jan 2026, 17:25
Google Photos Me Meme: The Revolutionary AI Feature That Lets You Create Personalized Memes

Google has officially launched its experimental ‘Me Meme’ feature in Google Photos, introducing a novel way for U.S. users to transform personal photos into shareable memes using generative AI. The announcement, made on January 23, 2026, represents Google’s latest effort to integrate artificial intelligence into everyday creative tools, following months of development first spotted by Android Authority in October 2025.

Google Photos Me Meme: How the AI Feature Works

The ‘Me Meme’ functionality operates through Google’s Gemini AI technology, specifically the Nano Banana image model that powers other creative features within the Photos application. Users can select from various meme templates or upload their own, then tap ‘add photo’ to insert an image of themselves or others. After selecting ‘Generate,’ the AI processes the input to create a personalized meme image. Google emphasizes that the experimental feature may not perfectly match original photos and recommends uploading well-lit, focused, front-facing images for optimal results. The company continuously adds new templates to expand creative possibilities. Once a meme is generated, users can save it, share it across platforms, or tap ‘regenerate’ for alternative AI interpretations.

The Technical Foundation Behind Personalized AI Memes

Google’s ‘Me Meme’ feature builds on the company’s established Gemini AI infrastructure, which has demonstrated significant advances in image generation and manipulation. The technology applies a large-scale generative image model to the specialized task of combining meme formats with human facial features.

Industry Context and Competitive Landscape

This development follows broader industry trends toward personalized AI content creation.
OpenAI previously found success with its Sora app, which lets users create AI videos featuring themselves and friends. Similarly, numerous social media platforms have integrated basic AI filters and effects. Google’s approach distinguishes itself through deeper integration with personal photo libraries and the sophisticated Gemini architecture. The feature’s initial U.S.-only rollout follows a common pattern for Google’s experimental releases, allowing the company to gather user feedback and optimize performance before potential global expansion. The strategy mirrors previous Google Photos feature launches, including the AI-powered photo restoration and style transfer tools that debuted in limited markets before wider availability.

Strategic Implications for Google’s Ecosystem

While seemingly lighthearted, the ‘Me Meme’ feature serves strategic purposes within Google’s broader ecosystem. Engaging AI tools of this kind encourage regular interaction with the Photos application, potentially increasing user retention and engagement metrics. They also demonstrate practical applications of Google’s AI research in consumer-friendly formats. The feature’s location within the ‘Create’ tab of Google Photos positions it alongside other creative tools, suggesting Google views AI-powered personalization as a natural extension of photo management rather than a separate category. This integration approach contrasts with standalone AI apps from competitors, potentially offering users a more seamless experience within their existing workflow.
User Experience Considerations and Best Practices

For optimal results with the ‘Me Meme’ feature, users should consider several factors:

- Image quality: high-resolution, well-lit photos with clear facial features produce better meme generation.
- Template selection: different meme formats work better with various photo types and expressions.
- Experimental nature: users should expect variations in output quality as Google refines the AI model.
- Privacy considerations: generated memes remain within user control for sharing decisions.

Google’s documentation indicates the feature processes images locally when possible, though complex generations may use cloud-based AI processing. This hybrid approach balances performance with privacy, which is particularly important for personal photo content.

Future Developments and Industry Trends

The ‘Me Meme’ launch occurs alongside significant industry events, including the upcoming Bitcoin World Disrupt 2026 conference in San Francisco from October 13-15, 2026, which will feature leaders from Google Cloud, Microsoft, Netflix, and other major technology companies discussing AI innovation and implementation strategies. Industry analysts observe that personalized AI features represent a growing trend across technology platforms. As AI models become more sophisticated and accessible, companies increasingly integrate them into existing applications rather than developing separate tools. This approach reduces user friction while demonstrating practical AI applications beyond technical demonstrations.

Conclusion

Google Photos’ ‘Me Meme’ feature represents a significant step in making advanced AI technology accessible for everyday creative expression. By allowing U.S. users to generate personalized memes from their own photos, Google demonstrates practical applications of its Gemini AI system while engaging users with interactive content-creation tools.
As the feature evolves from its experimental phase, it may influence broader trends in AI integration across consumer applications, potentially expanding to global markets with enhanced capabilities based on user feedback and technological advancements.

FAQs

Q1: What is the Google Photos Me Meme feature?
The Me Meme feature is an experimental AI tool within Google Photos that allows users to create personalized memes by combining their photos with various meme templates using Google’s Gemini AI technology.

Q2: Where is the Me Meme feature available?
Currently, the feature is available only to users in the United States as part of Google’s initial experimental rollout, with potential expansion to other regions based on testing results and user feedback.

Q3: What AI technology powers the Me Meme feature?
The feature utilizes Google’s Gemini AI system, specifically the Nano Banana image model that also powers other creative AI features within Google Photos, such as style transfers and image recreation tools.

Q4: How do I access the Me Meme feature in Google Photos?
When available to your account, the feature appears under the ‘Create’ tab within the Google Photos application. You may need to update to the latest version of the app to see the feature.

Q5: What types of photos work best with the Me Meme feature?
Google recommends using well-lit, focused, front-facing photos for optimal results, as the AI can better recognize and integrate clear facial features into meme templates.






































