- The Friday AI Dive
Welcome to The Friday AI Dive with AIGordon. Stay ahead with our weekly 5-minute deep dive into the latest AI advancements. Learn how AI drives innovation, optimizes business processes, and unlocks growth opportunities, keeping you informed and ahead of the curve.

Top News Of The Week!
Elon Shocks The World.
Elon Musk’s xAI has brought online one of the world’s largest supercomputers, "Colossus," powered by 100,000 Nvidia H100 GPUs and built in just 122 days. Located in Memphis, Tennessee, this AI training system is designed to accelerate the development of Musk’s AI chatbot Grok and other xAI projects. Colossus will soon double in size with the addition of 50,000 H200 GPUs, further boosting its computing power. Read more.

This Week On The Friday AI Dive
OpenAI, Anthropic sign deals with the US government for AI research and testing
AI Helps To Detect Rare Cardiac Anomalies
Anthropic Closer To Backing The California AI Bill after Lawmakers Made Some Amendments
AI Image Of The Week
Trending AI Tools of the Week
Prompt Of The Week
How To Get Into AI: Weekly Free Courses For Beginners
We Listen: Have a topic you want us to cover? Send an email to [email protected] and we will publish it.
Our Report
OpenAI, Anthropic sign deals with the US government for AI research and testing

OpenAI
OpenAI and Anthropic have partnered with the U.S. government to have their AI models tested and evaluated by the U.S. AI Safety Institute. These agreements aim to ensure the safe and ethical deployment of AI amid increasing regulatory scrutiny, potentially setting a global standard for AI safety and responsible development.
Key Points
Both companies will give the U.S. AI Safety Institute early access to their AI models for evaluation and safety testing.
The partnership focuses on ensuring the safe and responsible development of AI technologies, addressing risks related to ethics and security.
This initiative is part of broader efforts under the Biden administration to set AI safety standards and evaluate AI's societal impacts, such as on jobs and equity.
Why it Matters
This collaboration is significant as it marks a proactive effort by major AI developers to cooperate with government regulators on AI safety. It highlights the growing importance of establishing safeguards around AI use, ensuring it benefits society without compromising ethical standards or safety. As AI technologies are increasingly used in critical sectors like healthcare, law, and education, this effort could lead to global standards in AI governance.
Share Your Thoughts
With AI’s rapid integration into society, how important do you think it is for governments and tech companies to collaborate on AI safety? Could these partnerships serve as a model for the much-needed global AI regulation? Read more
AI Helps To Detect Rare Cardiac Anomalies

A new ECG diagnostic system uses self-supervised anomaly detection pretraining to improve the detection of rare but critical cardiac anomalies. This method addresses the challenge of imbalanced ECG datasets, significantly boosting diagnostic accuracy with 94.7% AUROC, 92.2% sensitivity, and 92.5% specificity for rare ECG types. It improves diagnostic efficiency, precision, and completeness in real-world clinical settings, particularly benefiting emergency care where rapid, accurate ECG interpretation is essential. This advancement marks a crucial step forward for AI integration in cardiology.
Key Points:
This AI-enhanced system introduces a self-supervised anomaly detection pretraining approach to enhance ECG analysis, particularly focusing on rare cardiac conditions that often go undetected.
It achieves 94.7% AUROC, 92.2% sensitivity, and 92.5% specificity for rare ECG anomalies, significantly improving accuracy compared to traditional methods.
In clinical settings, this approach improved diagnostic efficiency by 32%, precision by 6.7%, and completeness by 11.8%, showing its practical benefits for healthcare providers.
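For readers new to these metrics: sensitivity is the share of true anomalies the system catches, and specificity is the share of normal ECGs it correctly clears. The toy counts below are hypothetical, not from the study, but they reproduce the reported percentages:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = anomalies caught / all true anomalies;
    specificity = normals cleared / all true normals."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: of 1000 rare-anomaly ECGs, 922 are flagged; of 1000
# normal ECGs, 925 are correctly cleared -- matching the reported 92.2% / 92.5%.
sens, spec = sensitivity_specificity(tp=922, fn=78, tn=925, fp=75)
print(sens, spec)  # 0.922 0.925
```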
Why it matters:
This study addresses the longstanding issue of under-detection in rare but critical cardiac anomalies, improving diagnosis accuracy and clinical outcomes, especially in emergency care where timely and precise ECG interpretation is crucial.
Share your thoughts:
How do you think this AI-driven approach might transform the speed and reliability of cardiac diagnoses in clinical settings, especially in emergencies? Read more
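The study’s model is not reproduced here, but the core idea behind this kind of anomaly-detection pretraining (learn what plentiful normal signals look like, then flag inputs the model cannot reconstruct well) can be sketched in a few lines. This toy version substitutes PCA reconstruction error on synthetic waveforms for a deep network on real ECGs; all data, dimensions, and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)

# Plentiful "normal" beats: sine waves with small phase jitter and noise
# (a stand-in for the abundant normal ECG segments used in pretraining).
normal = np.array([np.sin(t + rng.normal(0, 0.05)) + rng.normal(0, 0.05, t.size)
                   for _ in range(200)])

# "Pretraining" on normal data only: fit a low-rank PCA reconstruction.
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
components = Vt[:5]  # keep the top 5 principal components

def anomaly_score(signal):
    """Mean squared reconstruction error: high for signals unlike the training data."""
    centered = signal - mean
    reconstruction = centered @ components.T @ components
    return float(np.mean((centered - reconstruction) ** 2))

# Flag anything scoring above the 95th percentile of scores on normal data.
threshold = np.percentile([anomaly_score(s) for s in normal], 95)

rare_anomaly = np.sin(3 * t) + rng.normal(0, 0.05, t.size)  # different morphology
print(anomaly_score(rare_anomaly) > threshold)  # the rare pattern is flagged
```

The key property, mirrored from the paper’s setup, is that the model never sees anomalies during pretraining, so rare conditions need no labeled examples to be detected.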
Anthropic Closer To Backing The California AI Bill After Lawmakers Made Some Amendments

Anthropic CEO Dario Amodei
Last week we told you about California’s controversial artificial intelligence bill (SB 1047). This week, Anthropic, the AI safety and research company behind the popular Large Language Model (LLM) product Claude, moved closer to endorsing it, emphasizing the need for regulations to prevent AI-related harm. The bill has sparked a divide in the tech industry, with some companies fearing it could hinder innovation. The decision could shape the future of AI regulation globally.
For more details, see the full article via the Read More link below.
Key Points:
Anthropic's Position: Following the amendments, Anthropic, a leading AI company, has voiced support for California’s AI safety bill (SB 1047), which mandates safety testing for large AI models to prevent potential harm.
Industry Division: The bill has caused a significant divide in the tech industry, with major players like Google and Meta opposing it, arguing that it could stifle innovation.
Focus on Safety: Proponents, including Anthropic, argue that safety regulations are crucial to protect society from the unintended consequences of rapidly advancing AI technology.
Why It Matters:
This news is crucial because it highlights the growing tension between innovation and regulation in the AI sector. As AI technologies evolve, the balance between fostering innovation and ensuring public safety becomes increasingly delicate. The outcome of this debate could set a precedent for AI regulation not just in California, but globally, influencing how AI development is managed in the future.
Share Your Thoughts:
Will stringent safety regulations like SB 1047 protect society from AI risks, or will they stifle innovation and hold back technological progress? Read More
AI Image Of The Week
Every week, one reader gets the chance to showcase their AI-generated image for free. Send in your image (one per person) and its prompt to be considered.

This week’s Image Creator: Mehana Braimah.
Prompt: A vividly desolate scene showcasing the detrimental effects of littering on the environment: garbage strewn across a once-pristine landscape, plastic bottles tumbling in the wind, and decaying food containers scattered among dying flora. This striking image, perhaps a digitally rendered artwork, exudes a sense of urgency and despair.
Pro Tip: Whenever you generate an image with an AI text-to-image tool, like LeonardoAI, ChatGPT-DALL-E, and others, add “SD3 Image” to your prompt for the best resolution and clearest output.
SD3: Stable Diffusion 3
Trending AI Tools of the Week
AgentGPT: for creating, configuring, and deploying AI agents
Jobtailor.ai: For getting AI-related jobs
GetGenerativeAI: turns inputs into sales proposals, a must-have for sales consultants
Prompt of the Week
How to Create 10 Urgent and Exclusive Email Subject Lines that Maximize Open Rates for Promoting an Online Digital Marketing Course
Type this prompt into ChatGPT/Claude: “Craft 10 compelling email subject lines that will maximize open rates for a content marketing campaign promoting an online digital marketing course.”
Recommended Resources
Ensure you visit our YouTube channel for more thought-provoking insights about Artificial Intelligence.
Thank you for your time.
And to make sure you get everything we have to offer: if this email landed in your Promotions or Spam folder, drag it into your Inbox so you get the next edition. See you on Friday!
- Al Braimah, Founder of AIGordon
PS: Was this email forwarded to you? Sign Up
PPS: What’s the #1 thing that made you want to check out this newsletter? Could you reply and let us know? We read every reply.