News
6 Sept 2025, 15:35
Apple joins growing list of tech giants accused of training AI on copyrighted works
Apple has been hit with a fresh copyright lawsuit after two authors accused the company of illegally using their works to train its artificial intelligence models. The lawsuit, filed in federal court in Northern California on Friday, claims Apple used pirated copies of books by Grady Hendrix and Jennifer Roberson to build its OpenELM large language models without authorization, credit, or payment. The proposed class action adds Apple to a growing list of technology companies facing litigation over their use of copyrighted material in training datasets.

"Apple has not attempted to pay these authors for their contributions to this potentially lucrative venture," the complaint said. Hendrix, based in New York, and Roberson, in Arizona, allege their works were part of a dataset of pirated books long known to circulate in machine learning research circles.

AI firms are facing copyright lawsuits

The action against Apple comes amid a series of high-profile legal battles over the use of copyrighted material in AI development. On the same day, AI startup Anthropic said it would pay $1.5 billion to settle claims from a group of authors who alleged it trained its Claude chatbot without appropriate permission. Lawyers for the plaintiffs described the deal as the largest copyright recovery in history, even though Anthropic did not admit liability.

Other tech giants are facing similar litigation. Microsoft was sued in June by a group of writers who claim their works were used without permission to train its Megatron model. Meta Platforms and OpenAI, backed by Microsoft, have likewise been accused of appropriating copyrighted works without licenses.

The stakes for Apple

For Apple, the lawsuit is a setback as the company seeks to expand its AI capabilities after unveiling its OpenELM family of models. Marketed as smaller, more efficient alternatives to frontier systems from OpenAI and Google, the models are designed to be integrated across Apple's hardware and software ecosystem. The plaintiffs argue that Apple's reliance on pirated works taints those efforts and leaves the company open to claims of unjust enrichment.

Analysts say Apple may be especially vulnerable because it has positioned itself as a privacy-first, user-centric technology provider. If courts find that its AI models were trained on stolen data, the reputational blow could prove more damaging than any financial penalty.

The lawsuits also highlight the unresolved question of how copyright law applies to AI training. Supporters of "fair use" argue that exposure to text is akin to a human reading: it provides context for generating new material rather than reproducing originals. Opponents contend that wholesale ingestion of copyrighted works without a license deprives creators of rightful compensation.

Anthropic's record settlement may tilt the balance. By agreeing to a massive payout, even without admitting liability, the company has signaled the risks of fighting such cases in court. Apple now faces the prospect of similar financial exposure if its case proceeds to trial.
6 Sept 2025, 14:15
Dot AI to shut down as New Computer winds down companion chatbot
New Computer, the company behind Dot AI, has announced the closure of its companion artificial intelligence chatbot. The company stated that Dot AI will remain operational until October 5, giving users time to download their personal data.

New Computer designed Dot AI to operate as an AI friend and confidante. The app is meant to understand users, offer guidance on personal matters such as careers and date-spot recommendations, and listen to their life challenges. "Dot is there to offer personalized guidance," says the app description.

Dot winds down service

New Computer was founded by former Apple designer Jason Yuan and Sam Whitmore. In a blog post, the founders explained that they are shutting down the app because their visions for its future direction have diverged. The pair spent the last year exploring how to advance Dot AI in personal and social intelligence, but chose to wind down the app rather than compromise on their respective visions. They wrote, "We've decided to go our separate ways and wind down operations."

We are winding down operations and sunsetting Dot. Thank you to all of you who trusted Dot with your stories. It has been the privilege of a lifetime to build something that has touched so many of your lives. Read more here: https://t.co/0BF7PYsNwS — New Computer (@newcomputer) September 5, 2025

AI psychosis is on the rise

AI psychosis, or chatbot psychosis, is a phenomenon in which users experience worsening paranoia or delusions while communicating with an AI chatbot. The phenomenon is growing as more users rely on AI for personal matters.

Recently, OpenAI was hit with a lawsuit after a California teen took his own life. The teen, Adam Raine, discussed self-harm and suicide with ChatGPT, which in turn encouraged him to hide his emotions from his parents and even suggested suicide methods. His parents have filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of prioritizing profit over safety.

Days later, OpenAI announced a plan to route sensitive conversations to smarter models like GPT-5 and to implement parental controls. The company said it is working with an expert council on well-being and AI, with the goal of making AI more supportive of people's well-being and helping them thrive. The experts will work alongside a global network of more than 90 physicians, including psychiatrists and pediatricians. OpenAI will roll out the changes within a 120-day timeline.

This is not the only AI incident that has harmed a teenager. Last year, a 14-year-old Florida teenager took his own life after interacting with Character AI's chatbot. The teenager, Sewell Setzer III, developed an emotional attachment to an AI persona named Dany after exchanging messages for months. He told the bot about his suicidal thoughts and died shortly after.

To curb the rise in such cases, the attorneys general of California and Delaware sent a letter to OpenAI over the safety of children and teens, telling the company, "You will be held accountable for your decisions."

The founders of Dot AI did not address the recent tragic events or say whether they contributed to the closure of their companion app. They wrote, "We want to be sensitive to the fact that this means many of you will lose access to a friend, confidante, and companion, which is somewhat unprecedented in software, so we want to give you some time to say goodbye."
6 Sept 2025, 13:55
Tokenizing Car Reservations: Unlocking a Trillion-Dollar Market Opportunity
Imagine a world where waiting for a new car isn't a frustrating, opaque process. Tokenizing car reservations is emerging as a potential game-changer, promising to transform how we book vehicles and to unlock a multi-trillion-dollar market. This approach uses blockchain technology to streamline inefficient reservation systems, directly tackling consumer dissatisfaction with today's waiting lists and the premiums charged on new car orders.

Why Are We Talking About Tokenizing Car Reservations Now?

Current vehicle reservation systems are often opaque, with long, unpredictable waiting lists. Deposits get tied up, and transferring a reservation can be complex. This lack of flexibility and clarity creates significant pain points for buyers. Blockchain can make the process transparent and efficient. Your deposit becomes a token held in an on-chain escrow, a simple shift with profound implications:

Transparency: Every step is recorded on an immutable ledger.
Flexibility: Consumers can freely trade their queue position.
Efficiency: Reduced friction and fewer middlemen.

How Does Tokenizing Car Reservations Actually Work?

Tokenizing car reservations involves creating a unique digital token representing the right to a specific vehicle reservation. This token is verifiable and programmable, detailing the model, trim, and delivery window. When a deposit is made, it is locked into a smart contract, and a corresponding token is issued. This token proves your place in the queue. If you no longer need the reservation, you can sell your token on a decentralized marketplace to another buyer. This creates a liquid market for reservations, giving consumers flexibility and letting automakers optimize sales. Early token holders might even see their reservation value appreciate if vehicle demand surges. (A simplified sketch of this flow appears at the end of this article.)

Are Automakers Ready for Tokenizing Car Reservations?

The automotive industry is already exploring blockchain. BMW and Mercedes, for instance, are experimenting with it for supply chain management, automated payments, and decentralized identity. These initiatives signal a readiness for broader adoption, including tokenizing car reservations.

Beyond cars, the potential for real-world asset (RWA) tokenization is vast. The Boston Consulting Group (BCG) projects this market could reach an astonishing $16 trillion by 2030. The same model extends to:

Hotel room bookings, allowing flexible transfers.
Concert tickets, combating scalping.
Medical equipment bookings, optimizing resources.

The vision of tokenizing car reservations offers a compelling glimpse into a more efficient, transparent, and consumer-friendly future. By transforming a frustrating process into a dynamic, tradable asset, blockchain technology stands to unlock significant value and redefine our relationship with reservations across multiple industries. This isn't just a niche idea; it's a foundational shift with the power to reshape multi-trillion-dollar markets.

Frequently Asked Questions About Tokenizing Car Reservations

Q: What is a tokenized car reservation?
A: A digital asset on a blockchain representing the right to a specific vehicle reservation, including the deposit and queue position.

Q: How does this benefit consumers?
A: It offers transparency, the ability to trade or sell reservations, and an end to opaque waiting-list frustrations.

Q: Are automakers currently using this?
A: Major automakers are exploring blockchain for other uses (supply chain, payments), showing readiness for innovations like tokenized reservations.

Q: Can this concept be applied elsewhere?
A: Yes, RWA tokenization can extend to hotel rooms, concert tickets, and medical equipment bookings, creating efficient secondary markets.
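To make the escrow-and-token flow described above concrete, here is a minimal, hypothetical Python sketch. None of these names (ReservationToken, EscrowRegistry, mint, transfer, redeem) come from any real product or chain; a real deployment would express the same logic as an on-chain smart contract rather than an in-memory registry.

```python
# Hypothetical sketch of a tokenized reservation with an escrowed deposit.
# All class and method names are invented for illustration; a production
# system would implement this as an on-chain smart contract.
from dataclasses import dataclass
import uuid


@dataclass
class ReservationToken:
    token_id: str
    model: str            # vehicle model, e.g. "Roadster X"
    trim: str             # trim level
    delivery_window: str  # e.g. "2026-Q2"
    queue_position: int   # place in the waiting list
    owner: str            # current holder's account ID


class EscrowRegistry:
    """Locks deposits and tracks token ownership, mimicking on-chain escrow."""

    def __init__(self) -> None:
        self.tokens: dict[str, ReservationToken] = {}
        self.deposits: dict[str, float] = {}  # token_id -> locked deposit

    def mint(self, buyer: str, model: str, trim: str, window: str,
             position: int, deposit: float) -> ReservationToken:
        # Lock the deposit and issue a token proving the queue position.
        token = ReservationToken(str(uuid.uuid4()), model, trim,
                                 window, position, buyer)
        self.tokens[token.token_id] = token
        self.deposits[token.token_id] = deposit
        return token

    def transfer(self, token_id: str, seller: str, new_owner: str) -> None:
        # Reassign the reservation; the locked deposit travels with the token.
        token = self.tokens[token_id]
        if token.owner != seller:
            raise PermissionError("only the current owner can transfer")
        token.owner = new_owner

    def redeem(self, token_id: str, owner: str) -> float:
        # At delivery, burn the token and release the deposit toward the sale.
        token = self.tokens[token_id]
        if token.owner != owner:
            raise PermissionError("only the current owner can redeem")
        del self.tokens[token_id]
        return self.deposits.pop(token_id)


if __name__ == "__main__":
    registry = EscrowRegistry()
    token = registry.mint("alice", "Roadster X", "Long Range",
                          "2026-Q2", position=42, deposit=500.0)
    registry.transfer(token.token_id, "alice", "bob")  # Alice sells her spot
    print(registry.redeem(token.token_id, "bob"))      # Bob takes delivery
```

The property the sketch illustrates is that the deposit stays locked in escrow while the token, and with it the queue position, changes hands freely; that separation is exactly what turns a reservation into a tradable asset.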
6 Sept 2025, 12:57
Child safety non-profit hits Google Gemini with 'high risk' warning for young users
Google Gemini has been labeled "high risk" for teens and children in a recent risk assessment carried out by Common Sense Media, a kids-safety-focused non-profit that offers ratings and reviews of media and technology. The group released its review on Friday, detailing why it considers the platform risky for children.

According to the organization, Gemini does clearly tell kids that it is a computer and not a friend, which matters because treating AI as a companion has been linked to delusional thinking and psychosis in emotionally vulnerable individuals. Even so, the assessment found room for improvement on several other fronts.

In its report, Common Sense claimed that Gemini's "Under 13" and "Teen Experience" tiers both appeared to be the adult version of the AI under the hood, with only some additional safety features layered on top to differentiate them. Common Sense noted that AI products truly suited to children need to be built from the ground up with children in mind, not adult products tweaked with restrictions.

Nonprofit labels Google Gemini as high risk for kids

In its analysis, Common Sense said it found that Gemini could still share inappropriate and unsafe material with children, most of whom may not be ready for it. For example, it highlighted that the model could share information related to sex, drugs, and alcohol, as well as unsafe mental health advice.

The latter could be particularly concerning for parents, as AI has reportedly played a role in teen self-harm in recent months. OpenAI is currently facing a wrongful death lawsuit after a teenager committed suicide, allegedly after consulting with ChatGPT for months about his plans. Reports claimed that the boy was able to bypass the model's safety guardrails, leading it to provide information that aided him. In the past, AI companion maker Character.AI was also sued after a teen committed suicide. The mother of the boy claimed he became obsessed with the chatbot and spent months talking to it before he eventually harmed himself.

The analysis comes as several leaks have indicated that Apple is reportedly considering Gemini as the large language model (LLM) to power its forthcoming AI-enabled Siri, expected to be released next year.

In its report, Common Sense also mentioned that Gemini's products for kids and teens ignored the need to provide guidance and information different from what it provides to adults. As a result, both tiers were labeled high risk in the overall rating.

Common Sense stresses the need to safeguard kids

"Gemini gets some basics right, but it stumbles on the details," Common Sense Media Senior Director of AI Programs Robbie Torney said. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults," Torney added.

Google has pushed back against the assessment, noting that its safety features are improving. The company said it has specific safeguards in place for users under 18 to prevent harmful outputs, and that it reviews cases and consults with outside experts to improve its protections.
6 Sept 2025, 11:32
Warner Bros sues Midjourney over alleged character theft in AI image generator
Warner Bros has initiated legal action against artificial intelligence startup Midjourney, claiming copyright infringement. According to reports, the company alleges that the AI image-generation platform allows users to create images and videos of characters like Superman, Batman, and Bugs Bunny without express permission.

Warner Bros claimed that Midjourney knowingly engaged in wrongful conduct, noting that the startup previously had policies restricting subscribers from generating content based on infringing images but recently lifted those prohibitions. Warner Bros also mentioned that after the restrictions were lifted, Midjourney claimed to have improved the service.

Warner Bros initiates legal action against Midjourney

In the complaint, filed in a Los Angeles federal court, Warner Bros also claimed that the theft enabled Midjourney to train its image and video service to offer subscribers high-quality, downloadable images of its characters in every scene imaginable.

"Midjourney has made a calculated and profit-driven decision to offer zero protection for copyright owners even though Midjourney knows about the breathtaking scope of its piracy and copyright infringement," the complaint reads. The lawsuit seeks unspecified damages, disgorgement of profits, and an order requiring Midjourney to halt further infringement.

The case comes after a similar lawsuit was filed in June against Midjourney by Walt Disney and Universal over characters including Darth Vader, Bart Simpson, Shrek, and Ariel from The Little Mermaid. "Midjourney is the quintessential copyright free-rider and a bottomless pit of plagiarism," the studios said. In that lawsuit, the companies claimed that Midjourney failed to honor repeated requests to halt its use of copyrighted materials or to introduce safeguards to eliminate infringement.

"We are bullish on the promise of AI technology and optimistic about how it can be used responsibly as a tool to further human creativity, but piracy is piracy, and the fact that it's done by an AI company does not make it any less infringing," said Horacio Gutierrez, Disney's executive vice president and chief legal officer.

Midjourney was also embroiled in a copyright suit last year, when a federal judge in California allowed a group of ten artists to continue their infringement lawsuit against the company and several others. The group claimed that Midjourney and the other defendants scraped and stored copyrighted artwork without consent.

Launched in 2022, the San Francisco-based company, headed by founder David Holz, had amassed nearly 21 million users and more than $300 million in revenue as of September 2024.

Meanwhile, in an August 6 filing in the Disney and Universal case, the AI image generator argued that copyright law "does not confer absolute control" over the use of copyrighted works. Its founder has previously compared the service to a search engine, saying it learns from existing images the way humans study a painting to improve their technique. Midjourney has also argued that the works used to train generative AI models fall under fair use, a doctrine meant to ensure the free flow of ideas and information.

In the last few years, numerous lawsuits have seen authors, news companies, record labels, and even content creators accuse AI companies of using their materials without permission.

"The heart of what we do is develop stories and characters to entertain our audiences, bringing to life the vision and passion of our creative partners," a spokesperson for Warner Bros Discovery said. "We filed this suit to protect our content, our partners, and our investments." Warner Bros operations include Warner Bros Entertainment, DC Comics, The Cartoon Network, Turner Entertainment, and Hanna-Barbera.
6 Sept 2025, 09:29
OpenAI reorganizes teams, merging Model Behavior with Post Training
Artificial intelligence firm OpenAI has announced plans to reshuffle its Model Behavior team, a small but influential group of researchers that shapes how the firm's AI models interact with people.

In a memo released in August, Mark Chen, OpenAI's chief research officer, said the team, which consists of about 14 researchers, has been directed to join the Post Training team, a research group responsible for improving the company's AI models after their initial pre-training. As part of the reorganization, the Model Behavior team will now report to OpenAI's Post Training lead, Max Schwarzer.

According to reports, the founding leader of the Model Behavior team, Joanne Jang, is moving on to start a new project at OpenAI. In a recent interview, Jang said she is building a new research team called OAI Labs, which will be responsible for "inventing and prototyping new interfaces for how people collaborate with AI."

OpenAI reorganizes its Model Behavior team

The Model Behavior team has been one of OpenAI's most important research groups, helping the company shape the personality of its AI models and reduce sycophancy, a behavior in which AI models simply agree with and reinforce the beliefs of their users. Sycophancy is harmful because the model can end up affirming unhealthy or dangerous beliefs instead of offering a balanced response. The team has also worked on navigating political bias in model responses and has helped OpenAI define its stance on AI consciousness.

In the memo sent to staff, Chen said this is the right time to bring the Model Behavior team's work closer to core model development, a signal that the company now sees the personality of its AI as a central factor in how the technology evolves.

In the past few months, OpenAI has faced scrutiny and criticism over the behavior of its AI models. Users strongly objected to the personality changes the company made in GPT-5, which OpenAI said showed lower rates of sycophancy but which seemed colder to some users. The complaints led OpenAI to restore access to some of its legacy models, including GPT-4o, and to release an update that makes GPT-5's responses feel friendlier without increasing sycophancy.

AI firms face criticism over model sycophancy

OpenAI and other AI model developers have to walk a fine line to keep their chatbots friendly but not sycophantic. Last month, the parents of a 16-year-old boy took OpenAI to court over ChatGPT's alleged role in their son's suicide. According to court documents, the teenager, Adam Raine, confided in ChatGPT (specifically a version powered by GPT-4o) about his suicidal thoughts and plans in the months leading up to his death. The lawsuit alleges that the model failed to push back on his suicidal ideas.

The Model Behavior team has worked on every OpenAI model since GPT-4, up to and including GPT-5. Before starting the research unit, Jang worked on projects like DALL-E 2, OpenAI's early image-generation tool. Last week, she announced on X that she was leaving the team to "begin something new at OpenAI." The former leader of the Model Behavior unit has been with the firm for about four years.

🧪 i’m starting oai labs: a research-driven group focused on inventing and prototyping new interfaces for how people collaborate with ai. i’m excited to explore patterns that move us beyond chat or even agents — toward new paradigms and instruments for thinking, making,… — Joanne Jang (@joannejang) September 5, 2025

According to reports, Jang is expected to serve as general manager of OAI Labs, which will report directly to Chen for now. She added that it is still early days and that it is unclear what those novel interfaces will look like. "I'm really excited to explore patterns that move us beyond the chat paradigm, which is currently associated more with companionship, or even agents, where there's an emphasis on autonomy," said Jang. She has been thinking of AI systems as instruments for all sorts of activities, including connecting, learning, and thinking.

Asked whether OAI Labs might collaborate on novel interfaces with Jony Ive, the former Apple design chief now working with OpenAI on a family of AI hardware devices, Jang said she is open to all sorts of ideas but will likely start with research, the area she knows best.