News
17 Jan 2026, 09:28
Former lawmaker says UK laws fall short in curbing Grok harms

A former UK lawmaker has warned that the United Kingdom's legislative approach will not reduce the harm caused by Grok. This comes after UK ministers responded to the backlash against Grok, Elon Musk's artificial intelligence chatbot, by fast-tracking legislation to ban the generation of non-consensual intimate images. According to the former lawmaker, the country is taking a "whack-a-mole" approach to regulating big technology companies.

While the law has drawn increased support, experts have warned that the changes may not go far enough to limit the harms posed by generative AI chatbots. "It looks like we are behind the curve, because we are," says Harriet Harman, a former deputy Labour leader. "And it looks like we're running to catch up, because we are. And it looks like we've got a scattergun approach, because we have."

Former lawmaker says the UK is behind in AI regulation

According to the former lawmaker, the country has also failed to clarify what the law should classify as "intimate" imagery. Although lawmakers in the US have described it as depicting nudity or underwear, backbenchers and ministers have argued that the creation of non-consensual images of women and children in bikinis and wet T-shirts using Grok exposes a significant weakness in the approach. Technology secretary Liz Kendall has also noted that the law is aimed at nudification applications and may not even apply to Grok.

According to Clare McGlynn, a professor of law at Durham University, the nudification ban will not tackle the generation of sexual images with Grok, because it will not even apply to the chatbot. The offense is designed to cover only applications developed for the creation of non-consensual intimate imagery. Grok, by contrast, is a general-purpose artificial intelligence model capable of generating images, text, and code, and would most likely fall outside the scope of the law.

In a letter to Labour MP Chi Onwurah, Kendall acknowledged that Grok might not be covered under the proposals. She said that during the analysis, officials identified that not all chatbots fell within the scope of the law, but noted that they have been commissioned to look into addressing the gap.

Experts warn about the risks of AI chatbots

Last Wednesday, X released a statement saying it would geoblock the ability to generate images of real people in revealing outfits, such as bikinis and underwear, in areas where such content is illegal. It remains to be seen whether similar images can still be generated through the standalone Grok application or the website; xAI, the company behind Grok, did not say whether the enforcement would extend to them.

The debate is unfolding against rising concern about violence against women and girls (VAWG) carried out using technology. Reports claim that around one in 10 recorded VAWG offenses already has a digital element, a figure experts believe significantly underestimates the true scale. Younger people face greater risks because they spend more time online. According to campaigners, artificial intelligence can act as a harm accelerant, allowing abuse to be generated and shared at a far larger scale. Meanwhile, experts have warned that other AI-chatbot controversies are likely to emerge in the future.
Michael Birtwistle, associate director at the Ada Lovelace Institute, an AI research body, said that future flashpoints could include children being targeted with sexual interactions from chatbots or AI assistants dispensing questionable health or financial advice to their users.
17 Jan 2026, 08:40
Elon Musk OpenAI Lawsuit: The Staggering $134 Billion Damages Demand That’s Not About Money

In a legal filing that has sent shockwaves through the technology and financial worlds, Elon Musk is seeking damages ranging from $79 billion to a staggering $134 billion from OpenAI and Microsoft. This demand, first reported by Bloomberg on March 15, 2025, emerges not from financial necessity for the world's wealthiest individual but from a profound dispute over the founding principles of artificial intelligence. The case, set for trial in late April in Oakland, California, represents one of the most consequential legal battles in tech history, pitting a visionary founder against the AI giant he helped create.

Elon Musk OpenAI lawsuit: Unpacking the $134 billion damages calculus

Financial economist C. Paul Wazzan, an expert witness with extensive experience in complex commercial litigation, prepared the damages analysis for Musk's legal team. Wazzan's calculation rests on a foundational premise: Musk should receive compensation equivalent to what an early investor would typically earn when a startup achieves extraordinary success. Specifically, Wazzan determined Musk deserves a substantial portion of OpenAI's current estimated $500 billion valuation based on his $38 million seed donation in 2015. This methodology yields a potential 3,500-fold return on Musk's initial investment.

Furthermore, Wazzan's analysis incorporates more than just financial contributions. It accounts for Musk's technical expertise and business guidance during OpenAI's formative years. The economist calculated wrongful gains of $65.5 billion to $109.4 billion for OpenAI itself and an additional $13.3 billion to $25.1 billion for Microsoft, its major partner.

Musk's attorneys argue this compensation framework reflects standard startup economics. Early investors who provide capital and strategic direction during a company's vulnerable initial phase typically expect outsized returns if that company becomes a market leader. Consequently, they contend Musk's requested damages represent the financial value of his early, risk-taking support.
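The figures quoted above can be sanity-checked with a quick back-of-the-envelope calculation. The short Python sketch below assumes that the headline $79 billion to $134 billion demand is simply the sum of the OpenAI and Microsoft wrongful-gain ranges, and that the roughly 3,500-fold figure is the upper bound divided by the $38 million seed donation; the filing's actual methodology is more involved and is not reproduced here. On those assumptions, the components line up roughly with the reported totals.

```python
# Back-of-the-envelope reconciliation of the damages figures quoted above.
# The inputs are taken from the article; the assumptions that the headline
# range is the sum of the two components and that the ~3,500-fold figure is
# the upper bound divided by the seed donation are ours, not the filing's.

seed_donation = 38e6                 # Musk's 2015 seed donation (USD)
openai_valuation = 500e9             # OpenAI's estimated valuation (USD)

openai_gain = (65.5e9, 109.4e9)      # wrongful gains attributed to OpenAI
microsoft_gain = (13.3e9, 25.1e9)    # wrongful gains attributed to Microsoft

total_low = openai_gain[0] + microsoft_gain[0]    # ~78.8e9, reported as ~$79B
total_high = openai_gain[1] + microsoft_gain[1]   # ~134.5e9, reported as ~$134B

# Implied multiple of the original seed donation at the upper bound
multiple = total_high / seed_donation             # ~3,539, near the reported ~3,500-fold

print(f"Total demand: ${total_low / 1e9:.1f}B to ${total_high / 1e9:.1f}B")
print(f"Implied return on the ${seed_donation / 1e6:.0f}M seed: ~{multiple:,.0f}x")
print(f"Share of the ${openai_valuation / 1e9:.0f}B valuation: "
      f"{total_low / openai_valuation:.0%} to {total_high / openai_valuation:.0%}")
```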
The contextual backdrop of unprecedented wealth

The sheer magnitude of Musk's damages demand becomes even more remarkable when viewed against his current financial standing. According to the latest Forbes billionaires list, Musk's personal fortune now approaches $700 billion. This figure exceeds the wealth of Google co-founder Larry Page, the world's second-richest person, by approximately $500 billion. In November 2024, Tesla shareholders separately approved a historic $1 trillion compensation package for Musk. This corporate pay deal remains the largest in recorded business history.

Against this backdrop of almost incomprehensible wealth, a $134 billion payout from OpenAI would represent a significant sum by any ordinary measure. However, it would constitute a relatively modest percentage increase to Musk's existing net worth. This financial context fuels OpenAI's characterization of the lawsuit as strategic rather than financial. Company representatives have described Musk's legal actions as part of an "ongoing pattern of harassment." They suggest the case serves purposes beyond monetary recovery, potentially involving competitive positioning or philosophical disagreement about AI's future direction.

Expert analysis: Legal precedents and valuation challenges

Legal experts following the case note several unprecedented aspects. First, damages calculations in breach-of-contract or fraud cases typically focus on actual financial losses, not hypothetical investment returns. Second, valuing a private company like OpenAI at $500 billion involves substantial estimation, as the firm hasn't conducted a recent public funding round. Third, attributing specific valuation increases to individual founders presents complex causal challenges. Technology companies grow through collective efforts of teams, market conditions, and technological breakthroughs. Isolating one person's contribution, especially from the earliest days, requires sophisticated economic modeling that courts may scrutinize heavily.

Finally, the case intersects with evolving legal standards for nonprofit organizations that transition toward commercial models. OpenAI began as a nonprofit research lab dedicated to developing safe artificial intelligence for humanity's benefit. Its subsequent creation of a for-profit subsidiary and partnership with Microsoft forms the core of Musk's allegations about mission abandonment.

The core allegation: Mission drift and breached trust

Musk's lawsuit fundamentally alleges that OpenAI defrauded him by departing from its original nonprofit mission. When Musk co-founded the organization in 2015 alongside Sam Altman and others, the stated goal was to develop artificial intelligence safely and distribute its benefits widely. The organization's charter explicitly prioritized humanity's welfare over shareholder returns.

The complaint argues that OpenAI's 2019 restructuring and subsequent Microsoft partnership violated these founding principles. Specifically, Musk contends the organization effectively became a closed-source, for-profit entity primarily serving Microsoft's commercial interests. This alleged shift, according to the lawsuit, constitutes a fundamental breach of the trust and agreement under which Musk provided his early support.

OpenAI has consistently defended its evolution. Company statements emphasize that the partnership with Microsoft provided essential resources for developing advanced AI systems like GPT-4. They maintain that their work continues to prioritize safety and broad benefit, even within a structure that includes commercial elements.

Comparative perspective: Tech industry founder disputes

The Musk-OpenAI conflict follows other notable disputes between founders and the companies they helped establish. For instance, Facebook's early legal battles with the Winklevoss twins involved allegations of stolen ideas rather than mission drift. Similarly, Uber's conflicts with former CEO Travis Kalanick centered on governance and culture, not fundamental purpose.

What distinguishes the current case is its focus on ethical and structural transformation. The lawsuit alleges not merely contractual breach but betrayal of a philosophical commitment to AI safety and accessibility. This dimension introduces novel questions about how courts evaluate promises made during a technology organization's idealistic beginnings.

Furthermore, the involvement of Microsoft adds another layer of complexity. As a strategic partner providing substantial computing resources and investment, Microsoft's role in OpenAI's direction becomes relevant to the damages calculation. The lawsuit suggests Microsoft benefited improperly from OpenAI's alleged mission shift, hence the inclusion of Microsoft in the damages claim.
Broader implications for AI governance and ethics

Beyond the immediate legal and financial stakes, the case raises profound questions about AI development governance. If successful, Musk's lawsuit could establish precedent regarding the obligations of AI organizations to their founding principles. It might influence how courts view transitions from nonprofit to commercial structures in the technology sector.

The trial also highlights ongoing debates about concentrated power in artificial intelligence. With a handful of companies controlling advanced AI capabilities, questions about accountability, transparency, and equitable access grow increasingly urgent. Musk's allegations touch directly on whether commercial incentives inevitably undermine commitments to safety and broad benefit.

Additionally, the case demonstrates how personal relationships among tech leaders can shape industry trajectories. Musk, Altman, and other OpenAI founders initially collaborated based on shared concerns about AI risks. Their subsequent divergence illustrates how strategic disagreements among influential figures can escalate into legal confrontations with industry-wide consequences.

Conclusion

The Elon Musk OpenAI lawsuit represents far more than a financial dispute between billionaires. At its core, the case grapples with fundamental questions about innovation, ethics, and accountability in artificial intelligence development. The staggering $134 billion damages figure underscores the immense value created in the AI sector, while the contrast with Musk's $700 billion fortune reveals the suit's symbolic and strategic dimensions.

As the trial approaches in Oakland, California, the technology world watches closely. The outcome could influence how AI companies structure their organizations, how they honor founding commitments, and how courts evaluate damages in cases involving rapidly evolving technologies. Regardless of the verdict, this legal battle has already illuminated the tensions between idealism and commercial reality that define contemporary artificial intelligence development.

FAQs

Q1: Why is Elon Musk suing OpenAI for $134 billion?
Elon Musk alleges that OpenAI defrauded him by abandoning its original nonprofit mission to develop safe AI for humanity's benefit. His lawsuit claims the organization's shift to a more commercial model, including its partnership with Microsoft, violated founding agreements. The $134 billion damages figure represents what an expert witness calculates as Musk's rightful share of OpenAI's current value based on his early contributions.

Q2: How does Musk's $700 billion fortune affect the lawsuit?
Musk's extraordinary wealth makes the financial damages less significant to his personal net worth, reinforcing OpenAI's argument that the lawsuit constitutes "harassment" rather than a legitimate financial grievance. The contrast highlights that the case primarily concerns AI ethics, governance, and alleged breach of trust rather than monetary need.

Q3: What is OpenAI's response to the allegations?
OpenAI has characterized Musk's legal actions as part of an "ongoing pattern of harassment." The company defends its evolution, arguing that the partnership with Microsoft provided necessary resources for developing advanced AI safely. OpenAI maintains it continues to prioritize beneficial AI development despite structural changes.

Q4: Who is C. Paul Wazzan and how did he calculate the damages?
C. Paul Wazzan is a financial economist specializing in valuation and damages in complex commercial litigation. He calculated Musk's potential damages by estimating what return an early investor would receive from OpenAI's current $500 billion valuation, considering both Musk's $38 million seed funding and his non-financial contributions during OpenAI's founding period.

Q5: What broader implications does this case have for AI development?
The lawsuit raises fundamental questions about AI governance, ethical commitments, and how organizations transition from nonprofit ideals to commercial realities. The outcome could influence legal standards for founder agreements, AI safety accountability, and how courts evaluate damages in rapidly evolving technology sectors.
17 Jan 2026, 08:10
California AG orders xAI to halt distribution of deepfake images

California Attorney General Rob Bonta sent a cease-and-desist letter to Elon Musk's xAI, demanding that the business immediately stop producing and disseminating offensive deepfake images generated by its Grok chatbot. The California AG released the cease-and-desist letter on Friday in response to allegations that Grok was being used to create unlawful content involving kids and nonconsensual adult photographs, allegations that had already prompted a California state inquiry. Bonta argued that it is illegal to create, distribute, publish, and display child sexual abuse material (CSAM).

California AG targets xAI over alleged misuse of Grok

Earlier this week, the California attorney general's office declared that it was looking into xAI due to allegations that the startup's chatbot, Grok, was being used to produce nonconsensual, inappropriate pictures of women and children. In response, the office sent the company a cease-and-desist letter.

"Today, I sent xAI a cease and desist letter, demanding the company immediately stop the creation and distribution of deepfakes, nonconsensual, intimate images, and illegal child abuse material. The creation of this material is illegal. I fully expect xAI to comply immediately. California has zero tolerance for illegal child abuse imagery." – Rob Bonta, California Attorney General.

The AG's office further asserted that xAI seems to be "facilitating the large-scale production" of nonconsensual, inappropriate photos, which are then "used to harass women and girls across the internet." According to the AG's office, one study found that over half of the 20,000 photos produced by xAI between Christmas and New Year's showed people wearing very little clothing, some of whom looked like children.

Bonta claimed in the announcement that these corporate practices violated California civil laws, including California Civil Code section 1708.86, California Penal Code sections 311 et seq. and 647(j)(4), and California Business & Professions Code section 17200. The California Department of Justice expects xAI to confirm within the next five days that it is taking immediate action to address these issues.

However, X's safety account had previously condemned this type of user behavior. It clarified on January 4 that it takes action against illicit content on X, such as CSAM, by deleting it, suspending accounts indefinitely, and collaborating with law enforcement authorities and local governments as needed. Notably, on January 4, Elon Musk warned that anyone using or prompting Grok to create illegal content will face the same consequences as if they had uploaded it.

Attorneys general intensify pressure on AI firms over child safety

The spread of free generative AI tools has driven an unsettling increase in non-consensual adult content, an issue that has been plaguing several platforms, not only X. For instance, Attorney General Bonta and Attorney General Jennings of Delaware met with OpenAI in September of last year to voice their serious concerns about the growing number of reports about how OpenAI's products interacted with youth.

In August of the same year, AG Bonta, along with 44 other attorneys general, sent a letter to 12 leading AI companies following reports of inappropriate interactions between AI chatbots and children. The letters were sent to Anthropic, Apple, Chai AI, Google, Luka Inc., Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika, and xAI.
AG Bonta and the 44 attorneys general informed the companies in the letter that states across the country were closely monitoring how companies develop their AI safety policies. They also emphasized that these businesses have a legal duty to children as consumers since they profit from children using their products.

In 2023, AG Bonta joined a bipartisan coalition of 54 states and territories in sending a letter to congressional leaders advocating for the establishment of an expert commission to investigate the potential use of AI to exploit children through CSAM. The coalition requested that the expert commission suggest laws to shield kids from such mistreatment. "The production of CSAM creates a permanent record of the child's victimization," according to the U.S. Department of Justice.