Highlights
How OpenAI is approaching 2024 worldwide elections
We’re working to prevent abuse, provide transparency on AI-generated content, and improve access to accurate voting information.
CES 2024: The weirdest tech, gadgets and AI claims from Las Vegas
CES 2024 is in full swing in Las Vegas. We’re on the ground covering the most talked-about news and announcements from the event, but much of the fun is found in the weirder margins of the show floor, in an era of CES where companies are all-in on the AI hype.
Microsoft launches a Pro plan for Copilot
Microsoft evidently envisions Copilot, the umbrella brand for its portfolio of AI-powered, content-generating technologies, becoming a significant future revenue line-item. And that’s perhaps not far off base; according to the company, more than 40% of the Fortune 100 participated in its Copilot early access program.
Advancing transparency: Updates on responsible AI research
Editor’s note: All papers referenced here represent collaborations throughout Microsoft and across academia and industry that include authors who contribute to Aether, the Microsoft internal advisory body for AI ethics and effects in engineering and research. A surge of generative AI models in the past year has fueled much discussion about the impact of artificial […]
Video
Shruti Ailani - Humans in AI
In this exciting episode of Humans In AI, Shruti Ailani, a senior technical security specialist at Microsoft Netherlands, shares her compelling perspective on the influential role AI is set to play in our work lives and businesses. She discusses how AI can not only improve efficiency but also free up mental space for greater creativity. Given her expertise in security, Shruti highlights how AI can keep us ten steps ahead of potential security threats.
Articles
Whispering LLaMA: A Cross-Modal Generative Error Correction Framework for Speech Recognition
Finetune LLMs on your own consumer hardware using tools from PyTorch and Hugging Face ecosystem
We demonstrate how to fine-tune a 7B-parameter model on a typical consumer GPU (NVIDIA T4, 16 GB) with LoRA and tools from the PyTorch and Hugging Face ecosystem, with a complete, reproducible Google Colab notebook.
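As a rough illustration of the approach, here is a minimal LoRA sketch using the Hugging Face transformers and peft libraries; the base model name, adapter rank, and target modules are assumptions for the example, not details taken from the article or its notebook.

```python
# Minimal LoRA fine-tuning sketch (model name and hyperparameters are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical 7B base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to fit a 16 GB GPU
    device_map="auto",
)

# Wrap the base model with low-rank adapters; only these small matrices are trained.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections, a common choice
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# Training then proceeds with the usual Trainer API on a tokenized dataset (not shown).
```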
Running Local LLMs and VLMs on the Raspberry Pi
Get models like Phi-2, Mistral, and LLaVA running locally on a Raspberry Pi with Ollama. Ollama has emerged as one of the best solutions for running local LLMs on your own computer without the hassle of setting everything up from scratch.
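For a sense of how this looks in practice, here is a small sketch that queries a locally running Ollama server over its default HTTP endpoint; it assumes Ollama is installed and that a model such as Phi-2 has already been pulled, and the prompt is purely illustrative.

```python
# Query a local Ollama server (assumes `ollama pull phi` has already been run).
import json
import urllib.request

payload = {
    "model": "phi",  # Phi-2 in Ollama's model library
    "prompt": "Explain what a Raspberry Pi is in one sentence.",
    "stream": False,  # return a single JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```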
3 Advanced Document Retrieval Techniques To Improve RAG Systems
Query expansion, cross-encoder re-ranking, and embedding adaptors.
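To illustrate one of the three techniques, here is a small cross-encoder re-ranking sketch using the sentence-transformers library; the model name, query, and candidate documents are illustrative and not drawn from the article.

```python
# Cross-encoder re-ranking sketch: score each (query, document) pair jointly,
# then re-order the initially retrieved candidates by that score.
from sentence_transformers import CrossEncoder

query = "How does LoRA reduce fine-tuning memory requirements?"
candidates = [
    "LoRA freezes the base model and trains small low-rank adapter matrices.",
    "The Raspberry Pi 5 ships with up to 8 GB of RAM.",
    "Parameter-efficient methods update only a fraction of model weights.",
]

# Joint scoring is slower than comparing precomputed embeddings, but usually more accurate.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, doc) for doc in candidates])

for score, doc in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.3f}  {doc}")
```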
Leverage KeyBERT, HDBSCAN and Zephyr-7B-Beta to Build a Knowledge Graph
In this blog, I explore the efficacy of combining traditional NLP and machine learning techniques with the versatility of LLMs. The pipeline integrates simple keyword extraction with KeyBERT, sentence embeddings with BERT, and UMAP for dimensionality reduction coupled with HDBSCAN for clustering. KeyBERT extracts candidate keywords, which are then refined with KeyLLM, based on Zephyr-7B-Beta, into an enhanced list of keywords and keyphrases.
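A compressed sketch of that pipeline is below, covering the KeyBERT, embedding, UMAP, and HDBSCAN stages; the documents and parameters are toy examples, and the KeyLLM refinement with Zephyr-7B-Beta is left out for brevity.

```python
# Keyword extraction + embedding + dimensionality reduction + clustering sketch.
from keybert import KeyBERT
from sentence_transformers import SentenceTransformer
import umap
import hdbscan

docs = [
    "LoRA adapters make fine-tuning large language models affordable.",
    "Parameter-efficient fine-tuning updates only a small set of weights.",
    "Quantized models can run on consumer GPUs with limited memory.",
    "Ollama runs local LLMs on small devices like the Raspberry Pi.",
    "Cross-encoder re-ranking improves retrieval-augmented generation.",
    "Query expansion adds related terms before document retrieval.",
]

# 1. Candidate keywords and keyphrases per document with KeyBERT.
kw_model = KeyBERT()
keywords = [kw_model.extract_keywords(d, keyphrase_ngram_range=(1, 2), top_n=3) for d in docs]

# 2. Sentence embeddings with a BERT-based encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(docs)

# 3. Reduce dimensionality with UMAP, then cluster with HDBSCAN.
reduced = umap.UMAP(n_neighbors=3, n_components=2, random_state=42).fit_transform(embeddings)
labels = hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(reduced)

print(keywords)
print(labels)  # cluster assignments; -1 marks noise points
```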
‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says
Pressure grows on artificial intelligence firms over the content used to train their products. The developer OpenAI has said it would be impossible to create tools like its groundbreaking chatbot ChatGPT without access to copyrighted material. Chatbots such as ChatGPT and image generators like Stable Diffusion are “trained” on a vast trove of data taken from the internet, much of it covered by copyright, a legal protection against someone’s work being used without permission.
It's OK to call it Artificial Intelligence
We need to be having high-quality conversations about AI: what it can and can’t do, its many risks and pitfalls, and how to integrate it into society in the most beneficial ways possible.