ChatGPT-maker U-turns on threat to leave EU over AI law
Sam Altman, the CEO of OpenAI, has said the company has no plans to leave Europe, retracting a threat he made earlier this week. Altman had previously indicated that OpenAI might consider exiting the region if complying with forthcoming artificial intelligence (AI) regulations proved excessively difficult.
The European Union (EU) is preparing legislation that could become the world’s first comprehensive regulatory framework for AI, a move Altman had criticized as “over-regulating.” After his remarks drew significant media attention, however, he backtracked.
Taking to Twitter, Altman clarified, “We are excited to continue to operate here and of course have no plans to leave.”
The proposed legislation entails requirements for generative AI companies to disclose the copyrighted material utilized in training their systems to produce text and images. Members of the creative industries have accused AI firms of using the creative works of artists, musicians, and actors to train algorithms that imitate their output.
Altman expressed concerns that certain safety and transparency obligations outlined in the AI Act would present technical challenges that OpenAI might find insurmountable, as reported by Time magazine.

Sam Altman, during a gathering at University College London, shared his positive outlook on the potential of artificial intelligence (AI) to create new employment prospects and mitigate inequality. The event provided a platform for discussions on various aspects of AI, including its risks such as disinformation, national security, and even existential threats. Altman, along with Prime Minister Rishi Sunak and executives from AI firms DeepMind and Anthropic, deliberated on the necessary measures, both voluntary and regulatory, to effectively manage these risks.
While concerns exist among experts regarding the possibility of super-intelligent AI systems posing a threat to humanity’s survival, Sunak countered by highlighting the transformative impact AI could have on society. He emphasized the potential for AI to yield positive outcomes and enhance public services, presenting emerging opportunities across a wide range of areas.

At the G7 summit in Hiroshima, the leaders of the United States, United Kingdom, Germany, France, Italy, Japan, and Canada emphasized the importance of international cooperation in building “trustworthy” artificial intelligence (AI). They recognized that regulating AI should be a global endeavor and pledged to work together towards that goal.
Ahead of the implementation of any AI regulations in the European Union (EU), the European Commission is aiming to establish an AI pact in collaboration with Alphabet, the parent company of Google. Thierry Breton, the EU industry chief, held discussions with Sundar Pichai, CEO of Google, in Brussels, emphasizing the need for proactive efforts in developing an AI pact voluntarily, prior to the legal deadlines.
Thierry Breton further emphasized the significance of international cooperation in regulating AI, stressing the urgency of working with AI developers to formulate a comprehensive set of metrics that can be regularly and consistently reported to regulators and the public. The aim is to establish transparency and accountability, ensuring that regulatory institutions have the tools they need to enforce compliance.
Tim O’Reilly, a Silicon Valley veteran, author, and founder of O’Reilly Media, highlighted the importance of mandating transparency and establishing regulatory institutions to ensure accountability in the AI field. He stressed the need for collaboration among companies developing advanced AI technologies to define a comprehensive set of metrics that can be regularly reported to regulators and updated as new best practices emerge. O’Reilly cautioned against AI fearmongering and emphasized the need for a practical and measured approach to regulation.
ChatGPT: US lawyer admits using AI for case research
A lawyer in New York is facing a court hearing after his law firm used the AI tool ChatGPT for legal research.
The judge expressed that the court was confronted with an “unprecedented situation” when it was discovered that a filing referenced example legal cases that did not actually exist.
The lawyer who employed the AI tool informed the court that he was “unaware that its content could be inaccurate.”
ChatGPT is a tool that generates original text upon request, but it comes with warnings acknowledging its potential to “produce inaccurate information.”
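To make concrete what “generates original text upon request” means, here is a minimal sketch using the openai Python package as it existed at the time (the pre-1.0 ChatCompletion interface); the model name and prompt are illustrative, and, as the warnings say, nothing in the reply should be treated as verified fact.

# Minimal sketch: asking a chat model a question with the pre-1.0 openai package.
# Assumes the OPENAI_API_KEY environment variable is set; model name illustrative.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarise Varghese v. China Southern Airlines."}],
)

# The reply is fluent, confident prose, but it may describe cases, quotes,
# or citations that do not exist; every claim needs checking against a
# primary source such as an actual legal database.
print(response.choices[0].message.content)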
The original case involved a man suing an airline over an alleged personal injury. In an attempt to establish why the case should proceed based on legal precedent, the plaintiff’s legal team submitted a brief citing several previous court cases.
However, the airline’s lawyers later notified the judge that they were unable to locate several of the cases mentioned in the brief.
Judge Castel stated in an order demanding an explanation from the plaintiff’s legal team, “Six of the cited cases appear to be fabricated judicial decisions with falsified quotes and fictitious internal citations.”
Through subsequent filings, it was revealed that the research was not conducted by Peter LoDuca, the plaintiff’s lawyer, but by a colleague at the same law firm. Steven A Schwartz, an attorney with over 30 years of experience, used ChatGPT to search for similar prior cases.
In his written statement, Mr. Schwartz clarified that Mr. LoDuca had no involvement in the research and was unaware of how it was conducted.
Mr. Schwartz expressed deep regret for relying on the chatbot, emphasizing that he had never used AI for legal research before and was unaware of the potential for inaccurate content.
He pledged never to employ AI to “supplement” his legal research in the future without absolute verification of its authenticity.
Attached screenshots in the filing indicate a conversation between Mr. Schwartz and ChatGPT.
One message reads, “Is Varghese a real case?” referring to Varghese v. China Southern Airlines Co Ltd, one of the cases that no other lawyer could find.
ChatGPT responds affirmatively, prompting “S” to ask, “What is your source?”
After “double checking,” ChatGPT confirms once again that the case is real and can be found in legal reference databases such as LexisNexis and Westlaw.
It asserts that the other cases it provided to Mr. Schwartz are also genuine.
Both lawyers, who are affiliated with the law firm Levidow, Levidow & Oberman, have been instructed to provide an explanation as to why they should not face disciplinary action at a hearing scheduled for June 8.
Since its launch in November 2022, millions of people have used ChatGPT. The tool can answer questions in natural, human-like language and can mimic other writing styles. Its training data is drawn from the internet as it existed in 2021.
Concerns have been raised about the potential risks associated with artificial intelligence (AI), including the dissemination of misinformation and the presence of bias.
Who is Linda Yaccarino, Twitter’s ‘superwoman’?
Linda Yaccarino, a highly respected figure in the advertising industry, faces a pivotal career move as she takes on the leadership role at Twitter. Leaving her prestigious position as head of advertising at one of America’s largest media companies raises the question: why would she risk joining a social media platform like Twitter, known for its checkered business history and an owner notorious for being unpredictable?
According to marketing veteran Lou Paskalis, who has known Yaccarino for over two decades, her personality may hold the key. Paskalis describes her as fierce, shrewd, and ambitious—a true “superwoman.” Given her inclination for tackling challenges head-on, he believes she would eagerly seize the opportunity to step in and proclaim, “I can fix this.”
But the real question lingers: can she?
Even prior to billionaire Elon Musk’s acquisition of Twitter last year, the platform was grappling with numerous issues. It faced vehement criticism from both the left and right for its handling of misinformation and hate speech. Furthermore, the company struggled to achieve consistent profitability, having only recorded annual profits twice since its inception in 2006. User and revenue growth have been sporadic, creating further obstacles.
Since Musk’s takeover, the problems have worsened. He implemented substantial layoffs, including in teams responsible for monitoring abusive content, altered the account verification process, and stirred controversy with tweets of his own that propagated conspiracy theories. Advertisers have departed in droves, while users remain skeptical. A recent Pew poll found that six in ten American adults who use Twitter (the United States is the platform’s largest market) have taken breaks from it, and a quarter say they do not expect to be using it a year from now.
Even Musk himself has appeared daunted by the challenges, completing his $44 billion acquisition only under the threat of a lawsuit. He has joked that only someone “foolish” would take the chief executive title from him.
Enter Linda Yaccarino, a 60-year-old New York native, holding a telecommunications degree from Penn State. Raised in an Italian-American family, Yaccarino has ascended the ranks within major US media companies, earning a reputation as a formidable and determined executive. She played a pivotal role in navigating the turbulent landscape for entertainment giant NBCUniversal amid the disruptions caused by the rise of technology giants.
Now, as Yaccarino assumes her new role at Twitter, the industry watches with anticipation to see if her expertise and tenacity will prove instrumental in overcoming the challenges and transforming the platform’s trajectory.

Ms Yaccarino, who was previously the chairman of global advertising and partnerships at NBCU, has been instrumental in reshaping the company’s ad sales business and spearheading the launch of its ad-supported streaming platform, Peacock. Her efforts have also sparked industry-wide discussions about data gaps as audiences transition to online platforms.
After more than a decade at NBCU, Ms Yaccarino had reportedly been seeking a bigger role, prompting speculation about a move to Twitter. Her defense of Mr. Musk at a conference last year, where she urged critical advertisers to give him a chance, further fueled that speculation.
Following the announcement of her new role at Twitter, Ms Yaccarino tweeted her excitement to be a part of the platform and contribute to its transformation, expressing admiration for Mr. Musk’s vision.
By bringing Ms Yaccarino on board, Mr. Musk has managed to instill trust among advertisers, according to industry experts. Major advertising agency GroupM, representing brands like Coca-Cola and Nestle, has already expressed a diminished sense of risk associated with advertising on Twitter.
However, revitalizing Twitter’s business will be a formidable challenge. Despite the desire of ad buyers to have alternatives to the dominant tech giants, Twitter’s current scale is still insufficient to be considered a must-buy platform, as noted by media analyst Brian Wieser, who now serves as the principal at consultancy firm Madison and Wall.

Linda Yaccarino, a highly regarded advertising executive, is preparing to tackle a multitude of pressing issues as she assumes leadership at Twitter. Alongside the challenges of enhancing advertising strategies, Yaccarino will face regulatory scrutiny regarding hate speech and privacy controls. Moreover, the platform is entangled in lawsuits from landlords, vendors, and former staff members over unpaid bills. User complaints and technical glitches, exemplified by the mishaps during a high-profile interview with Republican presidential candidate Ron DeSantis, further compound the company’s troubles.
The most unpredictable factor is undeniably Elon Musk, who intends to maintain his involvement in overseeing products and technology at Twitter. According to industry expert Brian Wieser, anyone stepping into this role is set up to fail, but Yaccarino seems to have more favorable odds than others.
Friends and former colleagues anticipate that Yaccarino will leverage her background in television to revamp the platform’s advertising business and expand the utilization of video content. She views Musk’s vision of Twitter as an “everything app” with messaging, payments, and other functionalities as a great opportunity for advertisers.
Jacqueline Corbelli, founder and CEO of Brightline, a tech company specializing in streaming adverts, notes Yaccarino’s courage to take significant risks and suggests that integrating successful past strategies with advertisers’ evolving needs can help restore trust in Twitter.
The question remains whether Yaccarino will have the freedom to implement her plans. Some commentators have raised concerns about the “glass cliff” phenomenon, wherein women are often appointed to leadership positions during precarious times. However, Yaccarino has confidently dismissed such analysis, emphasizing her resilience and determination.
Prior to her appointment, Yaccarino engaged Elon Musk at an industry conference, seeking clarification on Twitter’s “freedom of speech, not freedom of reach” concept and how it differed from other companies’ rules. She also inquired whether Musk would rein in his own tweeting, though his response provided little commitment.
Yaccarino’s friends assert that she is entering her new role with a clear understanding of the risks involved. Known for her patience and her ability to navigate difficult relationships, she confronts challenges head-on. Shelley Zalis, CEO of the Female Quotient and a close confidante, describes Yaccarino as a determined and fearless individual who unites the industry and drives progress.
Twitter pulls out of voluntary EU disinformation code
Twitter has chosen to withdraw from the European Union’s voluntary code aimed at combating disinformation, according to EU Internal Market Commissioner Thierry Breton. In a tweet, Breton mentioned that although Twitter has decided to opt out, new legislation will compel the company to adhere to disinformation-fighting obligations. He warned that Twitter cannot evade its responsibilities, emphasizing that compliance will be enforced starting from August 25. Twitter has not officially confirmed its position on the code and has yet to respond to requests for comment.
Numerous tech companies, including Meta (owner of Facebook and Instagram), TikTok, Google, Microsoft, and Twitch, have pledged their commitment to the EU’s disinformation code. Launched in June of the previous year, the code aims to prevent the exploitation of disinformation and fake news, increase transparency, and mitigate the spread of bots and fake accounts. Signatory companies can choose the specific commitments they will make, such as cooperating with fact-checkers and monitoring political advertising.
Under Elon Musk’s leadership, Twitter’s moderation efforts have reportedly been scaled back, leading to concerns about the proliferation of disinformation. Former Twitter employees and experts claim that a significant number of specialists dedicated to combating coordinated disinformation campaigns have either resigned or been laid off. In a recent investigation, the BBC discovered numerous Russian and Chinese state propaganda accounts thriving on Twitter.
While Musk argues that there is now “less misinformation rather than more” since he assumed control in October, the European Union has introduced the Digital Services Act (DSA) to hold platforms accountable for tackling illegal online content. Effective August 25, platforms with over 45 million monthly active users in the EU, including Twitter, will be legally obliged to comply with the DSA regulations. This means Twitter will need to establish a mechanism for users to report illegal content, promptly address notifications, and implement measures to address the dissemination of disinformation.
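The DSA does not prescribe code, but its notice-and-action requirement amounts to something like the hypothetical endpoint sketched below: any user can flag content, and each report is logged and acknowledged so that the platform’s handling of it can be audited. The route, field names, and in-memory storage are all invented for illustration.

# Hypothetical sketch of a DSA-style notice-and-action endpoint (Flask).
# Route, field names, and in-memory storage are invented for illustration.
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)
reports = []  # a real platform would need durable, auditable storage

@app.post("/report-illegal-content")
def report_illegal_content():
    notice = request.get_json()
    record = {
        "content_url": notice["content_url"],
        "reason": notice["reason"],
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
    reports.append(record)
    # The DSA expects prompt, traceable handling of every notice.
    return jsonify({"status": "received", "report_id": len(reports)}), 202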
A European Commission official reportedly stated that if Musk does not take the voluntary code seriously, it would be preferable for him to withdraw from it.
Neuralink: Elon Musk’s brain chip firm wins US approval for human study
Neuralink, the brain-chip firm founded by Elon Musk, has recently received approval from the US Food and Drug Administration (FDA) to commence its first human tests. The company’s ambitious goal is to establish a connection between the human brain and computers, with the aim of restoring vision, mobility, and cognitive abilities to individuals. While Neuralink does not currently have immediate plans to recruit participants, this FDA approval represents a significant milestone, as previous attempts to gain regulatory clearance were rejected on safety grounds.
Neuralink’s microchips, which have already been tested in monkeys, are designed to interpret brain signals and transmit the information to external devices over Bluetooth. The company hopes the technology can help treat conditions such as paralysis and blindness, and enable disabled people to operate computers and mobile devices.
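Neuralink has not published its software stack, but the pipeline described above (read electrode signals, extract features, map them to a device command) can be caricatured in a few lines. Every number and name below is invented for illustration; a real decoder is a trained model operating on far richer signals.

# Toy caricature of a brain-computer-interface pipeline; all values invented.
import numpy as np

rng = np.random.default_rng(0)

def read_electrodes(channels=64, samples=1000):
    # Stand-in for the implant's analogue front end.
    return rng.normal(0.0, 1.0, size=(channels, samples))

def spike_rates(signal, threshold=2.5):
    # Crude feature extraction: count threshold crossings per channel.
    return (signal > threshold).sum(axis=1)

def decode(rates):
    # Map activity to a command; a real decoder is a trained model.
    if rates[:32].sum() > rates[32:].sum():
        return "move_cursor_left"
    return "move_cursor_right"

command = decode(spike_rates(read_electrodes()))
print(command)  # would be relayed to the paired device, e.g. over Bluetooth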
Despite the promise of Neuralink’s brain implants, experts caution that extensive testing is required to overcome technical and ethical challenges before the technology can become widely accessible. Safety, accessibility, and reliability are emphasized as top priorities during the company’s engineering process, as stated on their website.
While Neuralink, established in 2016 by Elon Musk, has encountered setbacks and delays in the past, the recent FDA approval marks a significant step forward. The company aims to refine and perfect its brain-chip implants for human use, but the process requires meticulous attention to safety and efficacy. Neuralink’s vision aligns with Musk’s belief that the technology could potentially address concerns about human displacement caused by artificial intelligence.
Neuralink’s announcement about FDA approval follows a noteworthy breakthrough by Swiss researchers who successfully utilized brain implants to wirelessly transmit a paralyzed individual’s thoughts to his legs and feet, enabling him to walk. This achievement further underscores the potential of brain-computer interfaces in improving the quality of life for individuals with disabilities.
Meta loses millions as it is forced to sell Giphy to Shutterstock
Meta has completed the sale of Giphy, its animated-gif search engine, to Shutterstock for $53 million (£42 million). This comes just three years after Meta acquired Giphy for $400 million. The sale was prompted by an order from the UK’s competition watchdog, which cited concerns about competition in the market.
Giphy serves as the primary provider of animated gifs to popular social networks like Snapchat, TikTok, and Twitter. As part of the deal, Meta’s platforms, including Facebook, Instagram, and WhatsApp, will still have access to Giphy’s content.
Giphy’s usage is substantial: it receives more than 1.3 billion search queries a day, and its content is viewed 15 billion times daily. When Meta acquired Giphy, it promised to keep the service available to other social networks. The Competition and Markets Authority (CMA) nevertheless investigated the acquisition and found that it would harm competition in social media and advertising, ordering Meta to divest Giphy in November 2021. It was the first time the regulator had blocked a deal involving a major Silicon Valley company.
Meta had attempted to appeal the CMA’s decision in September but eventually agreed to comply with the order to sell Giphy in October, albeit with disappointment. Meanwhile, Shutterstock expressed excitement over its acquisition of Giphy. CEO Paul Hennessy highlighted Giphy’s ability to empower users to express themselves through gif and sticker content, while also offering brands opportunities to engage in casual conversations.
Giphy’s extensive library is fueled by contributions from individual artists and companies like Disney and Netflix, ensuring a constant stream of current content that can be shared and incorporated into everyday conversations via social media platforms.
Minister attacks Meta boss over Facebook message encryption plan
Mark Zuckerberg, the CEO of Meta (formerly known as Facebook), has come under fire from Security Minister Tom Tugendhat over the company’s plan to introduce encryption in Facebook messages. Tugendhat said that end-to-end encryption (E2EE), which makes messages readable only by their sender and recipient, would allow child abusers to operate with impunity.
The government has been a longstanding critic of encryption plans, arguing that it hinders law enforcement efforts to combat child sexual abuse and other illegal activities. Meta, however, has stated its commitment to collaborating with law enforcement agencies and child safety experts while implementing encryption technology.
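End-to-end encryption itself is well understood even where Meta’s exact rollout is not public (WhatsApp, for instance, uses the Signal protocol). The sketch below uses the PyNaCl library to show the property at the heart of the dispute: once a message is encrypted for the recipient, anyone relaying it, the platform included, sees only ciphertext.

# Minimal E2EE illustration with PyNaCl; not Meta's actual protocol.
from nacl.public import PrivateKey, Box

sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts using their private key and the recipient's public key.
ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"meet at noon")

# The relaying platform sees only ciphertext; decryption requires the
# recipient's private key paired with the sender's public key.
plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"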
Tugendhat singled out Mark Zuckerberg for his role in these decisions, asserting that they represent an extraordinary moral choice that disregards the prevalence of child sexual exploitation. The security minister made these remarks during his speech at the PIER23 conference on tackling online harms held at Anglia Ruskin University in Chelmsford.
To address the issue, the government plans to launch an advertising campaign aimed at informing parents about Meta’s choices and the potential impact on child safety. The campaign will employ various mediums such as print, online, and broadcast media, with the objective of urging tech companies to take responsibility and prioritize the protection of children. The Home Office, when approached by the BBC, declined to provide further details regarding the specifics of the campaign.

Meta, the parent company of Facebook, has responded to criticism by arguing that a majority of British people rely on encrypted apps to protect themselves from hackers, fraudsters, and criminals. The company emphasized that it has developed safety measures to combat abuse while maintaining online privacy and security, as it believes that people do not want their private messages to be read.
Meta also highlighted its ongoing efforts to remove and report millions of inappropriate images each month. Even WhatsApp, which is owned by Meta and uses end-to-end encryption, reported over one million incidents in a year. These statements come in response to the government’s concerns regarding encryption plans and campaigns urging Meta to abandon its implementation.
The Information Commissioner’s Office, the data watchdog, has previously supported encryption technology, asserting that it helps safeguard children from criminals and abusers. It urged Meta to proceed with implementing encryption without delay. However, the government’s Online Safety Bill, currently under consideration in Parliament, includes provisions that could empower communication regulator Ofcom to direct platforms to adopt accredited technology for scanning message content.
Messaging platforms such as Signal and WhatsApp have previously stated that they would refuse to compromise the privacy of their encrypted systems if directed to do so. The government contends that technological solutions can be developed to scan encrypted messages for child abuse material. Critics argue that this would require installing software on devices to scan messages before they are sent, a process known as client-side scanning. They assert that such an approach would fundamentally undermine message privacy, akin to digging a hole under a fence.
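In its simplest hash-matching form, client-side scanning looks like the sketch below: the device compares each outgoing attachment against a list of known-bad fingerprints before the message is ever encrypted. The hash list here is invented, and real proposals (Apple’s among them) used perceptual rather than exact hashes, but the objection is the same: the check runs on the user’s own device, inside what was supposed to be a private channel.

# Simplified client-side scanning: exact-hash matching before encryption.
# Real systems use perceptual hashes; this hash list is invented.
import hashlib

KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def scan_before_send(attachment: bytes) -> bool:
    # Return True if the attachment may be sent (no fingerprint match).
    digest = hashlib.sha256(attachment).hexdigest()
    return digest not in KNOWN_BAD_HASHES

if scan_before_send(b"holiday photo"):
    pass  # only at this point would the client encrypt and transmit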
While Apple attempted client-side scanning, it faced significant backlash and subsequently abandoned the initiative. Former National Cyber Security Centre boss Ciaran Martin suggested in an article that Apple privately opposes the powers outlined in the Online Safety Bill, but the company has yet to publicly state its position on the matter. Recent Freedom of Information requests have revealed that Apple has held four meetings with the Ofcom team responsible for developing policy regarding the enforcement of the relevant section of the bill since April 2022.
China bans major chip maker Micron from key infrastructure projects
China has accused Micron Technology, the leading US memory chip giant, of being a national security threat due to “serious network security risks” associated with its products. The country’s cyberspace regulator, the Cyberspace Administration of China (CAC), announced the ban on Micron’s products from being used in critical infrastructure projects within China. This move signifies China’s first major action against a US chip maker and reflects the deepening conflict between Beijing and Washington over crucial technology. The dispute has witnessed the US imposing several measures on China’s chip manufacturing industry and investing heavily in its own semiconductor sector.
The CAC stated that Micron’s products pose significant security risks to China’s critical information infrastructure supply chain, thereby affecting national security. However, specific details about the identified risks or the Micron products implicated were not disclosed by the CAC.
Micron confirmed that it had received the notice from the CAC and expressed its intention to assess the conclusion and determine the subsequent steps. The company also affirmed its willingness to engage in discussions with Chinese authorities. Meanwhile, the US government responded by declaring its intention to collaborate with allies to address the alleged distortions caused by China’s actions in the memory chip market. It firmly opposed restrictions lacking factual basis and criticized China for contradictory claims of market openness and a transparent regulatory framework.
The news led to a 5.3% decline in Micron’s share price during pre-market trading in the US. Nevertheless, analysts from investment banking group Jefferies believed that the ban’s impact on Micron would be relatively limited since the company does not heavily rely on the Chinese government or telecommunications for the majority of its sales in China. Micron’s customers in China are predominantly concentrated in the smartphone and personal computer sectors.
However, analysts warned that there could be a risk of Micron’s Chinese customers shifting to its competitors, such as Samsung and SK Hynix, both based in South Korea. The US has reportedly urged South Korea not to fill any potential supply gaps created by China’s actions. China is an important market for Micron, accounting for approximately 10% of its full-year sales. In 2022, Micron reported a total revenue of $30.7 billion, with $3.3 billion coming from mainland China. Additionally, Micron has manufacturing facilities in China.
The CAC’s announcement followed a joint statement from the G7 leaders, issued during their meeting in Japan, which criticized China over, among other things, its use of “economic coercion.” US President Joe Biden said that G7 nations were exploring ways to diversify and reduce risk in their relationship with China, including by diversifying supply chains.
Notably, Micron’s CEO, Sanjay Mehrotra, participated in the G7 summit in Hiroshima as part of a group of business leaders. Last week, Micron announced a substantial investment of around 500 billion yen ($3.6 billion) in technology development in Japan.
Neeva: Ad-free search engine shuts down
Neeva, a search engine founded by a former Google ad executive, has announced its decision to shut down. The company’s unique approach of offering an ad-free and tracker-free search experience, while asking users to pay for the service, failed to attract a significant number of subscribers.
In addition to its ad-free model, Neeva also introduced AI-generated answers to enhance search results. However, despite these efforts, the founders have acknowledged the challenges of convincing users to switch from established search engines to a new and unfamiliar platform.
The founders cited various factors for the closure, including strong competition from well-established organizations with abundant resources, as well as user reluctance to change default search settings. They also mentioned a changed economic landscape since Neeva’s launch in October.
While shutting down the search engine, the company aims to capitalize on its expertise in AI, particularly in large language models (LLMs) that power chatbots like ChatGPT. Neeva plans to explore commercial opportunities to apply its AI and search capabilities, with a focus on meeting the pressing need for effective, affordable, and responsible use of LLMs.
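Neeva has not detailed what applying its search and LLM expertise will look like, but combining the two typically follows the retrieval-augmented pattern sketched below: retrieve relevant passages first, then have the model answer only from them, which keeps responses grounded and comparatively cheap. Both functions here are placeholders for a real search backend and a real model call.

# Hypothetical retrieval-augmented answering loop; all names are placeholders.
def search_index(query: str) -> list[str]:
    # Stand-in for a real search backend returning relevant passages.
    return ["Neeva was an ad-free, subscription-based search engine."]

def ask_llm(prompt: str) -> str:
    # Stand-in for a call to a hosted or local large language model.
    return "Neeva was an ad-free, subscription-based search engine."

def answer(query: str) -> str:
    passages = search_index(query)
    prompt = (
        "Answer using only the sources below; reply 'not found' otherwise.\n"
        + "\n".join("- " + p for p in passages)
        + "\nQuestion: " + query
    )
    return ask_llm(prompt)

print(answer("What was Neeva?"))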
The company, which employed approximately 50 people and had raised $77.5 million from investors, expressed its commitment to providing updates on its future work and team in the coming weeks.