Highlights
Introducing GPT-4o: OpenAI’s new flagship multimodal model now in preview on Azure | Microsoft Azure Blog
OpenAI, in partnership with Microsoft, announces GPT-4o, a groundbreaking multimodal model for text, vision, and audio capabilities. Learn more.
Slop is the new name for unwanted AI-generated content
Watching in real time as "slop" becomes a term of art. Just as "spam" became the term for unwanted emails, "slop" is going in the dictionary as the term for unwanted AI-generated content.
Exploring the mysterious alphabet of sperm whales
MIT CSAIL and Project CETI researchers reveal complex communication patterns in sperm whales, deepening our understanding of animal language systems.
Paper of the week
How does one build a coding LLM? Here's how
Open source is very powerful, even if that is not yet as visible when it comes to generative AI. StarCoder is a different story, though. The scientists behind the StarCoder 2 model did an awesome job training an LLM on code, and the best part? They documented how they did it. The absolute kicker: you can run it yourself. I think this is a must-try, even if it takes you a weekend.
Video
AI Show On Demand | Using Machine Learning to Improve Employee Training
In this episode, Jennifer will delve into the power of GPT-4 for employee training. Learn how to generate custom quizzes and assessments from text and explore the potential of AI to enhance learning experiences.
OpenAI Spring Update - Recap
Today, the 13th of May 2024, OpenAI is giving its spring update. We are looking forward to hearing what they are bringing. Join Sammy and Gavita afterwards to discuss it.
Articles
MatterSim: A deep-learning model for materials under real-world conditions
Property prediction for materials under realistic conditions has been a long-standing challenge in the digital transformation of materials design. MatterSim learns atomic interactions from the fundamental principles of quantum mechanics.
Introducing GraphRAG with LangChain and Neo4j
The main goal of LangGraph is to overcome a key limitation of traditional LangChain chains: the lack of cycles in their runtime. Chains are, by design, directed acyclic graphs (DAGs), so this limitation is bypassed by introducing a graph-like structure that allows cycles between steps.
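The difference between an acyclic chain and a cyclic graph can be shown with a small toy sketch (plain Python, not the LangGraph API; the node names and loop condition are illustrative): a conditional edge routes execution back to an earlier node until a condition is met, which a linear DAG chain cannot express.

```python
# Toy state machine: "draft" and "critique" nodes, with an edge that
# loops back from critique to draft until three drafts were produced.

def draft(state):
    state["attempts"] += 1
    state["text"] = f"draft v{state['attempts']}"
    return state

def critique(state):
    state["done"] = state["attempts"] >= 3
    return state

nodes = {"draft": draft, "critique": critique}

def next_node(current, state):
    if current == "draft":
        return "critique"
    if current == "critique":
        # This backward edge is the cycle a plain chain cannot have.
        return None if state["done"] else "draft"

def run(start, state):
    current = start
    while current is not None:
        state = nodes[current](state)
        current = next_node(current, state)
    return state

result = run("draft", {"attempts": 0})
print(result["text"])  # draft v3
```

The same two functions wired as a plain chain would run each node exactly once; the cycle is what lets the graph iterate until the critique step is satisfied.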
The (lesser known) rising application of LLMs
For example, the quantities for the ingredients, or the preparation and cooking time for each step. For each class, I defined the relevant attributes and provided a description of the field along with examples.
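A schema of this kind can be sketched with standard-library dataclasses (the class and field names here are hypothetical illustrations, not the article's actual schema); an LLM is then prompted to return JSON that parses into these classes.

```python
from dataclasses import dataclass, field

@dataclass
class Ingredient:
    name: str        # e.g. "flour"
    quantity: str    # e.g. "250 g"

@dataclass
class Step:
    instruction: str  # e.g. "Mix and fry"
    minutes: int      # preparation or cooking time for this step

@dataclass
class Recipe:
    title: str
    ingredients: list[Ingredient] = field(default_factory=list)
    steps: list[Step] = field(default_factory=list)

# What a parsed extraction result might look like:
recipe = Recipe(
    title="Pancakes",
    ingredients=[Ingredient("flour", "250 g")],
    steps=[Step("Mix and fry", 15)],
)
print(recipe.steps[0].minutes)  # 15
```

Libraries such as Pydantic play the same role in practice, adding validation and JSON-schema generation on top of the class definitions.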
Safeguard Your LLM Chatbot With Llama Guard 2
How to apply content moderation to your LLM’s inputs and outputs for a more responsible AI system
Human rights lawyer Susie Alegre: ‘If AI is so complex it can’t be explained, there are areas where it shouldn’t be used’
The author of Human Rights, Robot Wrongs on why AI isn’t an all-or-nothing equation, separating hype from genuine dangers, and discovering that ChatGPT says she doesn’t exist.
OpenAI’s newest model is GPT-4o
OpenAI is releasing a new flagship generative AI model called GPT-4o, set to roll out “iteratively” across the company’s products over the next few weeks. OpenAI CTO Mira Murati said that GPT-4o provides “GPT-4-level” intelligence but improves on GPT-4’s capabilities across text and vision as well as audio. “GPT-4o reasons across voice, text and vision,” […]
DoRA: Weight-Decomposed Low-Rank Adaptation
In this work, we first introduce a novel weight decomposition analysis to investigate the inherent differences between FT and LoRA. Aiming to resemble the learning capacity of FT from the findings, we propose Weight-Decomposed Low-Rank Adaptation (DoRA). DoRA decomposes the pre-trained weight into two components, magnitude and direction, for fine-tuning, specifically employing LoRA for directional updates to efficiently minimize the number of trainable parameters. By employing DoRA, we enhance both the learning capacity and training stability of LoRA while avoiding any additional inference overhead. DoRA consistently outperforms LoRA on fine-tuning LLaMA, LLaVA, and VL-BART on various downstream tasks, such as commonsense reasoning, visual instruction tuning, and image/video-text understanding. Code available at this https URL.
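The magnitude/direction decomposition can be sketched in a few lines of NumPy (an illustrative toy with made-up sizes, not the authors' implementation): the frozen weight is split into a per-column magnitude and a normalized direction, and the LoRA factors B and A update only the direction.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 6, 4, 2

W0 = rng.normal(size=(d_out, d_in))           # frozen pre-trained weight

# Magnitude: per-column norm of W0 (trainable); direction: W0 normalized.
m = np.linalg.norm(W0, axis=0, keepdims=True)  # shape (1, d_in)

# LoRA factors for the directional update; B starts at zero so BA = 0.
B = np.zeros((d_out, r))
A = rng.normal(size=(r, d_in))

def dora_weight(W0, m, B, A):
    V = W0 + B @ A                             # direction updated via LoRA
    return m * (V / np.linalg.norm(V, axis=0, keepdims=True))

# At initialization BA = 0, so the merged weight equals W0 exactly.
W = dora_weight(W0, m, B, A)
print(np.allclose(W, W0))  # True
```

Because the merged weight can be computed once after training, inference uses a single dense matrix and incurs no extra overhead relative to the base model, matching the abstract's claim.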
Upcoming events
Microsoft Build | May 21-23, 2024 | Seattle and Online
Learn from in-demand experts, get hands-on with the latest AI innovations, and connect with the developer community.