Highlights
Introducing Aurora: The first large-scale foundation model of the atmosphere
Aurora, a new AI foundation model from Microsoft Research, can transform our ability to predict and mitigate extreme weather events and the effects of climate change by enabling faster and more accurate weather forecasts than ever before.
GPT-2 five years later
Jack Clark, now at Anthropic, was a researcher at OpenAI five years ago when they first trained GPT-2. In this fascinating essay, Jack revisits their decision not to release the full model, which was based on concerns about potentially harmful ways the technology could be used.
A Deep Dive into In-Context Learning
Knowledge Graph-Augmented Generation (KGAG) is another dynamic prompting approach: it integrates structured knowledge graphs into the prompt to reshape the task being solved and thereby enhance the factual accuracy and informativeness of language model outputs. We also discussed how domain adaptation techniques, such as in-context learning and fine-tuning, can be leveraged to overcome the “comfort zone” limitations of these models and build enterprise-grade, compliant generative AI-powered applications.
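To make the in-context flavour of this concrete, here is a toy sketch of knowledge-graph-augmented prompting: facts relevant to the question are retrieved from a small graph and placed directly in the prompt. The triples and helper names are hypothetical illustrations, not the article's actual KGAG pipeline.

```python
# Toy sketch of knowledge-graph-augmented prompting (hypothetical triples and
# helpers): relevant facts are serialized and injected into the prompt so the
# model answers "in context" rather than from parametric memory alone.
KG = [
    ("Aurora", "developed_by", "Microsoft Research"),
    ("Aurora", "models", "the atmosphere"),
]

def facts_for(entity: str) -> str:
    """Serialize the triples that mention an entity so they can go in-context."""
    return "\n".join(f"{s} {p.replace('_', ' ')} {o}" for s, p, o in KG if entity in (s, o))

question = "Who developed Aurora?"
prompt = (
    "Answer using only the facts below.\n\n"
    f"Facts:\n{facts_for('Aurora')}\n\n"
    f"Question: {question}\nAnswer:"
)
print(prompt)  # this assembled prompt would then be sent to the language model
```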
This Week in AI: Can we (and could we ever) trust OpenAI?
Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own. By the way, TechCrunch plans to launch an AI newsletter […]
Paper of the week
Better interpretable neural networks with Kolmogorov networks
As more governments consider regulating AI, it is important that we gain a better understanding of how ML solutions work internally. Using neural networks in a solution makes it harder to see what is happening, so some researchers are looking for ways to make them easier to interpret. This paper, for example, proposes a new way to train neural networks that makes them more interpretable. Well worth a read!
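For readers curious what a “Kolmogorov network” might look like in code, below is a rough, hypothetical sketch of the underlying idea (small learnable one-dimensional functions on edges instead of fixed activations on nodes); it is not the paper's implementation.

```python
# Rough sketch of the Kolmogorov-Arnold idea (NOT the paper's code): every edge
# carries a learnable 1-D function, here a linear combination of Gaussian bases.
import torch
import torch.nn as nn

class LearnableEdgeFunctions(nn.Module):
    """Applies an independent learnable univariate function to each (in, out) edge."""
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        self.centers = torch.linspace(-2, 2, n_basis)                 # fixed basis centers
        self.coeffs = nn.Parameter(torch.randn(out_dim, in_dim, n_basis) * 0.1)

    def forward(self, x):                                             # x: (batch, in_dim)
        # Evaluate the Gaussian bases at each input, mix with learned coefficients,
        # then sum the incoming edge values for every output unit.
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2))   # (batch, in_dim, n_basis)
        edge_outputs = torch.einsum("bik,oik->boi", basis, self.coeffs)
        return edge_outputs.sum(dim=-1)

model = nn.Sequential(LearnableEdgeFunctions(2, 4), LearnableEdgeFunctions(4, 1))
print(model(torch.randn(5, 2)).shape)  # torch.Size([5, 1])
```

Because every learned function is one-dimensional, each can be plotted on its own, which is what makes this family of models comparatively easy to inspect.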
Video
AI Show LIVE | Boosting Phi-3’s Language Capabilities with Azure AI Translator
Join us on The AI Show where we explore how Azure AI Translator enhances Phi-3's multilingual translation quality. Discover the transformative impact of integrating advanced algorithms for low-resource languages and how this synergy bridges communication gaps with remarkable precision.
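One common wiring for this kind of synergy, sketched below under assumptions, is to translate a low-resource-language query into English, answer it with the small model, and translate the answer back. The endpoint and headers follow the public Azure Translator v3 REST API; the ask_phi3 function is a placeholder rather than the show's actual setup.

```python
# Hedged sketch: translator-assisted use of a small model like Phi-3.
# Azure Translator v3 REST call; replace key/region, and wire ask_phi3 to a
# real Phi-3 deployment (the function below is a stand-in).
import requests

TRANSLATOR_ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "<your-translator-key>",
    "Ocp-Apim-Subscription-Region": "<your-region>",
    "Content-Type": "application/json",
}

def translate(text: str, to_lang: str) -> str:
    params = {"api-version": "3.0", "to": to_lang}
    resp = requests.post(TRANSLATOR_ENDPOINT, params=params, headers=HEADERS, json=[{"Text": text}])
    return resp.json()[0]["translations"][0]["text"]

def ask_phi3(prompt: str) -> str:
    # Placeholder: call your Phi-3 deployment (e.g. an Azure AI endpoint) here.
    return f"[Phi-3 answer to: {prompt}]"

user_query_sw = "Hali ya hewa itakuwaje kesho?"            # Swahili input
english_query = translate(user_query_sw, to_lang="en")      # low-resource -> English
english_answer = ask_phi3(english_query)                     # reason in English
print(translate(english_answer, to_lang="sw"))               # English -> user's language
```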
Building a copilot: Azure AI Studio or Copilot Studio | BRK203
Are you interested in creating a copilot for your business or a client and need some guidance on how to begin and which studio to select? Daniel and Henk wil...
Articles
Scarlett Johansson won’t save us from AI – but if workers have their say, it could benefit us all | Peter Lewis
Social media has taught us that technology is neither innately good nor bad. AI must be approached with this same critical mindset. Tech overlord Sam Altman’s legal skirmish with actor Scarlett Johansson brings the blurred lines between artificial intelligence and the world it seeks to transform into sharper focus.
Getting started with Semantic Kernel
In recent months, we have witnessed the extraordinary capabilities of Large Language Models (LLMs) like ChatGPT. However, the real paradigm shift occurred when we started embedding those LLMs within our applications. This implies integrating a new set of LLM-related components into our application logic, including memory, metaprompts, plug-ins, and so on.
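As a starting point, here is a minimal, hedged sketch of that pattern with the semantic-kernel Python package; class and method names change between releases, so treat them as assumptions and check the current documentation before copying.

```python
# Rough sketch of embedding an LLM in application logic via Semantic Kernel.
# API names below are assumptions (roughly the 1.x Python API) and vary by release.
import asyncio
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

async def main():
    kernel = sk.Kernel()
    # Register an LLM service; endpoint, key, and deployment are placeholders.
    kernel.add_service(AzureChatCompletion(
        service_id="chat",
        deployment_name="<your-deployment>",
        endpoint="https://<your-resource>.openai.azure.com/",
        api_key="<your-key>",
    ))
    # A prompt-based "plug-in" function the kernel can orchestrate alongside
    # other components such as memory and native code functions.
    summarize = kernel.add_function(
        plugin_name="writer",
        function_name="summarize",
        prompt="Summarize the following text in one sentence:\n{{$input}}",
    )
    result = await kernel.invoke(summarize, input="Semantic Kernel embeds LLMs in application logic.")
    print(result)

asyncio.run(main())
```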
Build Your Own ChatGPT-like Chatbot with Java and Python
Modern deployed models, which need to process vast amounts of data from many users simultaneously, are not only large but also require substantial computing resources to perform inference and provide quality service to their clients. Initially, the primary system functionality will be to let a client submit a text query, have it processed by an LLM, and return the result to the source client, all within a reasonable timeframe and with a fair quality of service.
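A stripped-down sketch of that request/response loop might look like the following; Flask and a small Hugging Face model stand in for the article's fuller Java-and-Python system, and all names are illustrative.

```python
# Minimal sketch of the client -> LLM -> client flow (illustrative only).
# Assumes Flask and the transformers library; gpt2 is a small stand-in model.
from flask import Flask, request, jsonify
from transformers import pipeline

app = Flask(__name__)
generator = pipeline("text-generation", model="gpt2")  # stand-in for a larger LLM

@app.post("/chat")
def chat():
    query = request.get_json().get("query", "")
    # The model processes the client's text query; note that gpt2's output
    # echoes the prompt before continuing it.
    reply = generator(query, max_new_tokens=100)[0]["generated_text"]
    return jsonify({"reply": reply})

if __name__ == "__main__":
    app.run(port=8000)
```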
TinyAgent: Function Calling at the Edge
The ability of LLMs to execute commands through plain language (e.g. English) has enabled agentic systems that can complete a user query by orchestrating the right set of tools (e.g. ToolFormer, Gorilla).
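The core orchestration loop can be illustrated with a toy dispatcher: the model emits a structured function call, and the runtime routes it to the matching tool. The tools and JSON format below are hypothetical and are not TinyAgent's actual implementation.

```python
# Toy illustration of tool orchestration (hypothetical tools and schema):
# the LLM chooses a function name and arguments, the runtime dispatches the call.
import json

def get_weather(city: str) -> str:
    return f"Sunny in {city}"

def send_email(to: str, body: str) -> str:
    return f"Email sent to {to}"

TOOLS = {"get_weather": get_weather, "send_email": send_email}

def dispatch(model_output: str) -> str:
    """model_output is assumed to be JSON emitted by the LLM, e.g.
    {"tool": "get_weather", "args": {"city": "Berlin"}}."""
    call = json.loads(model_output)
    return TOOLS[call["tool"]](**call["args"])

print(dispatch('{"tool": "get_weather", "args": {"city": "Berlin"}}'))
```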
The Crossroads of Innovation and Privacy: Private Synthetic Data for Generative AI
Synthetic data could potentially help address some privacy concerns with AI model development and training, but it comes with limitations. Researchers at Microsoft are exploring techniques for producing more realistic data with strong privacy protections.
Deploying “whisper” from HF Model Hub onto Azure ML
Using Whisper with Azure Speech gives certain capabilities such as diarization (keeping track of individual speakers), word-level timestamps (important for syncing captions with audio/video), a batch inference API (for files under 1 GB), and an upcoming capability for fine-tuning. Unlike standard ASR (automatic speech recognition) models, Whisper employs a unique generative inference procedure.
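For a quick local feel of what gets deployed, here is a minimal sketch that runs Whisper straight from the Hugging Face Model Hub with the transformers pipeline; the Azure ML endpoint wiring from the article is omitted, and word-level timestamp support depends on your transformers version.

```python
# Minimal local sketch: Whisper from the HF Model Hub via the transformers
# pipeline (requires ffmpeg for audio decoding). The audio filename is a placeholder.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# return_timestamps="word" asks for word-level timestamps (version-dependent);
# the result also contains the full transcript under "text".
result = asr("sample_audio.wav", return_timestamps="word")
print(result["text"])
```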
The ugly truth behind ChatGPT: AI is guzzling resources at planet-eating rates | Mariana Mazzucato
Big tech is playing its part in reaching net zero targets, but its vast new datacentres are run at huge cost to the environment.