A decoder-only foundation model for time-series forecasting

Posted by Rajat Sen and Yichen Zhou, Google Research. Time-series forecasting is ubiquitous across domains such as retail, finance, manufacturing, healthcare, and the natural sciences. In retail use cases, for example, it has been observed that improving demand forecasting accuracy can meaningfully reduce inventory costs and increase revenue. Deep learning (DL) models have emerged as […]
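
A decoder-only forecaster works autoregressively: it predicts the next time step from a context window, then feeds that prediction back into the context to forecast further ahead. A minimal sketch of that decoding loop, with a plain least-squares linear predictor standing in for the actual model (the function names and the toy series are illustrative, not from the paper):

```python
import numpy as np

def fit_linear_ar(series, context_len):
    """Fit a linear autoregressive predictor: next value from the
    previous `context_len` values, by ordinary least squares."""
    X = np.array([series[i:i + context_len]
                  for i in range(len(series) - context_len)])
    y = series[context_len:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

def forecast(series, coefs, horizon):
    """Autoregressive decoding: predict one step, append the prediction
    to the context, and predict again until the horizon is reached."""
    context = list(series[-len(coefs):])
    out = []
    for _ in range(horizon):
        nxt = float(np.dot(coefs, context))
        out.append(nxt)
        context = context[1:] + [nxt]
    return out

# A noiseless linear trend: the fitted predictor should extrapolate it.
series = np.arange(20, dtype=float)
coefs = fit_linear_ar(series, context_len=3)
preds = forecast(series, coefs, horizon=3)
```

The same feed-predictions-back-in loop is how any decoder-only model produces multi-step forecasts; only the per-step predictor differs.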


Graph neural networks in TensorFlow

Posted by Dustin Zelle, Software Engineer, Google Research, and Arno Eigenwillig, Software Engineer, CoreML. Objects and their relationships are ubiquitous in the world around us, and relationships can be as important to understanding an object as its own attributes viewed in isolation; take, for example, transportation networks, production networks, knowledge graphs, or social networks.
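
As a toy illustration of the idea (plain Python, not the TF-GNN API), a graph can be expressed as per-node feature vectors plus an edge list, and one round of "message passing" updates each node from its neighbors' features; this neighbor aggregation is the core operation a graph neural network learns on:

```python
# Hypothetical toy graph: three nodes with 2-d feature vectors,
# connected by undirected edges.
features = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
edges = [("a", "b"), ("b", "c"), ("a", "c")]

def neighbors(node):
    """All nodes sharing an edge with `node` (edges are undirected)."""
    return ([v for u, v in edges if u == node] +
            [u for u, v in edges if v == node])

def message_pass(features):
    """One aggregation round: replace each node's features with the
    element-wise mean of its neighbors' features."""
    updated = {}
    for node in features:
        nbr_feats = [features[n] for n in neighbors(node)]
        updated[node] = [sum(col) / len(nbr_feats) for col in zip(*nbr_feats)]
    return updated

out = message_pass(features)
```

A real GNN interleaves learned transformations with this aggregation; stacking more rounds lets information flow across longer paths in the graph.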


DP-Auditorium: A flexible library for auditing differential privacy

Auditing differential privacy (DP) guarantees is essential for ensuring that data-protection mechanisms work as intended. However, developing these mechanisms is complex and error-prone. DP-Auditorium is a new open-source library for efficiently auditing DP guarantees with only black-box access to the mechanisms.
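
A black-box audit of this kind can be sketched in a few lines (this is an illustration of the principle, not the DP-Auditorium API; the mechanism and function names are hypothetical): sample the mechanism repeatedly on two neighboring datasets and compare output frequencies, since any observed log frequency ratio lower-bounds the true epsilon.

```python
import math
import random

random.seed(0)

def noisy_sum(data):
    """Toy mechanism under audit: the sum plus Laplace(scale=1) noise
    (difference of two Exp(1) draws), rounded to form a discrete histogram."""
    noise = random.expovariate(1.0) - random.expovariate(1.0)
    return round(sum(data) + noise)

def empirical_epsilon_lower_bound(mech, d1, d2, trials=20000):
    """Black-box audit sketch: estimate output distributions on two
    neighboring datasets and return the largest absolute log frequency
    ratio over outputs observed in both runs. A value above the
    mechanism's claimed epsilon would indicate a violation."""
    c1, c2 = {}, {}
    for _ in range(trials):
        o1, o2 = mech(d1), mech(d2)
        c1[o1] = c1.get(o1, 0) + 1
        c2[o2] = c2.get(o2, 0) + 1
    return max(abs(math.log(c1[o] / c2[o])) for o in c1 if o in c2)

# Neighboring datasets differ in one record.
eps_hat = empirical_epsilon_lower_bound(noisy_sum, [1, 2, 3], [1, 2, 4])
```

Real auditors are more sophisticated about which output events to test and how to bound sampling error, but the structure is the same: probe the black box on neighbors and look for a distinguishing event.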


Learning the importance of training data under concept drift

Posted by Nishant Jain, Pre-doctoral Researcher, and Pradeep Shenoy, Research Scientist, Google Research. The constantly changing nature of the world around us poses a significant challenge for the development of AI models. Often, models are trained on longitudinal data with the hope that the training data used will accurately represent inputs the model may receive […]
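
One simple stand-in for weighting training data under drift (a toy sketch, not the method from the post) is to down-weight stale examples with an exponential recency weight, so an estimate tracks the current regime rather than the global average:

```python
def recency_weights(n, decay=0.7):
    """Exponential recency weights: the newest example gets weight 1,
    the oldest gets decay**(n-1)."""
    return [decay ** (n - 1 - i) for i in range(n)]

def weighted_mean(values, weights):
    """Average of `values` under per-example importance weights."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# The data "drifts" from around 0.0 to around 10.0 halfway through.
stream = [0.0] * 5 + [10.0] * 5
est = weighted_mean(stream, recency_weights(len(stream)))
plain = sum(stream) / len(stream)  # unweighted mean, dragged down by stale data
```

Learned approaches replace the fixed decay schedule with per-example importance scores, but the mechanism is the same: a weighted training objective that emphasizes data matching the current distribution.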


Advances in private training for production on-device language models

Posted by Zheng Xu, Research Scientist, and Yanxiang Zhang, Software Engineer, Google. Language models (LMs) trained to predict the next word given input text are the key technology for many applications [1, 2]. In Gboard, LMs are used to improve users’ typing experience by supporting features like next word prediction (NWP), Smart Compose, and smart completion […]
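
The next-word-prediction task itself can be shown with a deliberately simple model (a bigram count table, far simpler than the neural LMs Gboard uses; the training text is made up): predict the word most often observed after the current one.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        following[cur][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    candidates = model.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
```

Neural LMs replace the count table with a learned distribution over the whole vocabulary, which is what makes private, on-device training of them challenging.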


VideoPrism: A foundational visual encoder for video understanding

Posted by Long Zhao, Senior Research Scientist, and Ting Liu, Senior Staff Software Engineer, Google Research. An astounding number of videos are available on the Web, covering a variety of content, from everyday moments people share to historical moments to scientific observations, each of which contains a unique record of the world. The right tools […]


Google at APS 2024: A leader in quantum research

The 2024 March Meeting of the American Physical Society (APS) is currently taking place in Minneapolis, MN. This leading conference covers a broad range of topics in physics and related fields. Google has a strong presence there, with a booth, more than 50 talks, and contributions to the conference’s organizing activities […]


Croissant: a metadata format for ML-ready datasets

Posted by Omar Benjelloun, Software Engineer, Google Research, and Peter Mattson, Software Engineer, Google Core ML, and President, MLCommons Association. Machine learning (ML) practitioners looking to reuse existing datasets to train an ML model often spend a lot of time understanding the data, making sense of its organization, or figuring out what subset to use […]
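
Croissant builds on JSON-LD, so a dataset description is machine-readable metadata along these lines (an illustrative sketch only; the field names here are simplified schema.org-style properties, and the real Croissant vocabulary should be taken from the specification):

```python
import json

# Hypothetical, simplified metadata for a made-up dataset.
metadata = {
    "@type": "Dataset",
    "name": "toy-images",
    "description": "A hypothetical image dataset used for illustration.",
    "distribution": [
        {
            "@type": "FileObject",
            "name": "images.zip",
            "encodingFormat": "application/zip",
        }
    ],
}

# Because it is plain JSON, the metadata round-trips through standard
# serialization, which is what lets ML tools consume it directly.
roundtrip = json.loads(json.dumps(metadata))
```

The point of a standard format is exactly this machine-readability: tools can discover a dataset's files and structure from the metadata instead of a practitioner reverse-engineering them.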


Social learning: Collaborative learning with large language models

Posted by Amirkeivan Mohtashami, Research Intern, and Florian Hartmann, Software Engineer, Google Research. Large language models (LLMs) have significantly improved the state of the art for solving tasks specified using natural language, often reaching performance close to that of people. As these models increasingly enable assistive agents, it could be beneficial for them to learn […]

