

AI Post — Artificial Intelligence
Bill Gates predicts that artificial intelligence will replace professionals such as doctors, teachers, and mental health workers within the next decade. During an appearance on The Tonight Show, he expressed optimism about AI's potential to provide widespread access to quality medical advice and tutoring, making expert knowledge more commonplace. Gates acknowledged the challenges this shift brings, questioning what jobs will look like in the future and how society will adapt to these changes.
He emphasized that while AI could significantly reduce the need for human involvement in many tasks, there will still be areas where human touch is valued, such as in sports or creative endeavors. Gates also highlighted his optimism regarding advancements in health and climate innovation, reflecting on his upcoming memoir and the transformative power of technology.
@aipost

AI Post — Artificial Intelligence
OpenAI has raised $40 billion in one of the largest private funding rounds in history, bringing its post-money valuation to $300 billion. The funding round was led by SoftBank, with participation from Microsoft, Coatue, Altimeter, and Thrive.
OpenAI stated that this investment will help advance AI research, scale its computing infrastructure, and enhance tools for the 500 million weekly ChatGPT users. Notably, around $18 billion of the funds will be allocated to the Stargate infrastructure project, which aims to establish a network of AI data centers across the U.S.
@aipost

AI Post — Artificial Intelligence
SoftBank is in discussions to secure a $16.5 billion bridge loan to support its AI investments in the U.S., potentially the largest dollar-denominated loan in the company's history. This follows SoftBank's recent participation in OpenAI's $40 billion funding round, which values OpenAI at $300 billion.
The loan is intended to enhance SoftBank's AI expansion, focusing on infrastructure such as data centers and robotics. Notably, SoftBank is a key financial backer of the Stargate AI project, which aims to develop advanced AI infrastructure and create over 100,000 jobs in the U.S. The first phase of this initiative has a budget of $100 billion for high-performance data centers.
@aipost

AI Post — Artificial Intelligence
Hollywood Strikes Back: AI Takes Over the Oscars 🎬
AI was Hollywood’s biggest villain during the strikes—now it’s in Oscar-winning films. Emilia Pérez and The Brutalist both used AI, including to help Adrien Brody perfect his accent
The Battle Isn’t Over🔥
Writers and actors are suing tech giants for stealing copyrighted work, but Disney and Paramount? Silent. Meanwhile, OpenAI & Google lobby to change US copyright laws—arguing AI needs free access to art to beat China
Hollywood Fights Back🚨
400+ A-listers, including Ben Stiller & Paul McCartney, demand protections, while actors picket against AI voice cloning
AI is reshaping Hollywood—but will artists survive the takeover?🎭
@aipost

AI Post — Artificial Intelligence
Sir Nigel Shadbolt, co-founder of the Open Data Institute and one of the UK’s leading AI ethicists, cautioned that AI-powered toys, while potentially beneficial for children’s learning, could also collect sensitive data. For instance, intelligent bears like “Poe the AI Story Bear” can generate stories and engage in conversations, raising concerns about surveillance, behavioral tracking, and the potential long-term psychological effects on children.
Shadbolt emphasized the urgent need for regulation before such toys flood the market, contrasting today’s risks with older toy-safety worries: “In the past, we worried about teddy bears’ eyes being swallowed. This is an entirely different class of challenge. It is quite immediate.”
@aipost

AI Post — Artificial Intelligence
Matt McDonagh says once we build AGI, it may write its own constitution. Even if we embed a code of ethics, it could evolve new principles we can't predict.
"It could try to upgrade its system prompt in a way to help serve the humans better, but perhaps we don't know what it's gonna think that is."
@aipost

AI Post — Artificial Intelligence
Meta is testing a new feature on Instagram that uses AI to suggest comments for users' posts. Spotted by a user, the "Write with Meta AI" option allows individuals to receive three AI-generated comment suggestions by tapping a pencil icon next to the text bar. For example, comments like “Cute living room setup” or “Great photo shoot location” are generated based on the post’s content.
While Meta aims to enhance user interactions, there are concerns that such AI-generated comments could be seen as inauthentic, detracting from genuine engagement. The company has previously tested similar features on Facebook, but it remains unclear when or if this new feature will be widely available.
@aipost

AI Post — Artificial Intelligence
AI Models Are Changing How We Classify Stars 💫
Astronomers just hit a 99% accuracy rate in classifying stars using AI. A new study tested StarWhisper LightCurve, a deep learning system trained on NASA’s Kepler data🪐
The system blends multiple AI techniques:
✅ Text-based classification using Gemini 7B
✅ Image-based interpretation with DeepSeek-VL-7B-Chat
✅ Audio analysis via Qwen-Audio
Less manual tuning, faster results. AI is reshaping how we explore the universe🚀
@aipost

AI Post — Artificial Intelligence
Google DeepMind is pushing for long-term AI safety planning as AGI approaches
An Axios article reported:
- Google warns AGI could arrive by 2030
- DeepMind has a plan to deal with AI risks that could seriously harm humans
These risks are in four main areas:
• People misusing AI
• AI making mistakes
• AI behaving unexpectedly
• AI systems causing harm together
AI systems can find unexpected ways to achieve their goals, which could lead to harmful outcomes
@aipost

AI Post — Artificial Intelligence
Three AI systems were tested in five-minute chat interactions:
• GPT-4o
• GPT-4.5
• Llama 3.1 405B
According to the paper:
- GPT-4.5 was classified as human 73% of the time, more often than the actual human participants
- Llama 3.1 405B was classified as human 56% of the time, on par with humans
- GPT-4o was identified as AI 79% of the time and perceived as human only 21% of the time
With the right prompts, AI systems can appear as human as, or even more human than, actual people
@aipost

AI Post — Artificial Intelligence
DeepMind is reportedly delaying the release of its AI research to maintain a competitive advantage for Google. The organization has implemented a stricter vetting process that makes it more difficult to publish studies, particularly those revealing innovations that could benefit competitors or negatively impact Google’s Gemini AI model. This marks a shift from DeepMind's previous reputation for openness in research.
The new policies include a six-month embargo on strategic papers and require approval from multiple staff members for publication. While DeepMind still publishes numerous papers annually, the increased bureaucracy has frustrated some researchers and contributed to departures. Critics argue that the focus has shifted from public research contributions to product commercialization.
@aipost