Articles for category: AI Research

Health-specific embedding tools for dermatology and pathology

Posted by Dave Steiner, Clinical Research Scientist, Google Health, and Rory Pilgrim, Product Manager, Google Research There’s a worldwide shortage of access to medical imaging expert interpretation across specialties including radiology, dermatology and pathology. Machine learning (ML) technology can help ease this burden by powering tools that enable doctors to interpret these images more accurately and efficiently. However, the development and implementation of such ML tools are often limited by the availability of high-quality data, ML expertise, and computational resources. One way to catalyze the use of ML for medical imaging is via domain-specific models that utilize deep learning (DL)
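
The embedding-tool idea above can be sketched in code: a frozen, pretrained image encoder turns each image into a fixed-length embedding, and a small task-specific head is trained on limited labeled data. The encoder, preprocessing, and label names below are generic placeholders, not Google's health-specific models or API.

```python
# Hypothetical sketch: a frozen, pretrained image encoder standing in for a
# health-specific embedding model. Model choice, preprocessing, and labels
# are illustrative assumptions, not the tools described in the article.
import torch
import torchvision.models as models
import torchvision.transforms as T
from torch import nn

# Frozen encoder produces fixed-length embeddings for each image.
encoder = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
encoder.fc = nn.Identity()          # drop the classification head
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False

preprocess = T.Compose([T.Resize(224), T.CenterCrop(224), T.ToTensor()])

def embed(images):
    """Map a batch of preprocessed images to embedding vectors."""
    with torch.no_grad():
        return encoder(images)      # shape: (batch, 2048)

# A small task-specific head can then be trained on limited labeled data.
classifier = nn.Linear(2048, 2)     # e.g. benign vs. concerning lesion (placeholder)
```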

High-Low Frequency Detectors

This article is part of the Circuits thread, an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks. Naturally Occurring Equivariance in Neural Networks Curve Circuits Introduction Some of the neurons in vision models are features that we aren’t particularly surprised to find. Curve detectors, for example, are a pretty natural feature for a vision system to have. In fact, they had already been discovered in the animal visual cortex. It’s easy to imagine how curve detectors are built up from earlier edge detectors, and it’s easy to guess why curve detection
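
The intuition that curve detectors are assembled from earlier edge detectors can be made concrete with a toy convolution: a few hand-written oriented edge filters feed a second layer whose weights place each orientation tangent to an arc. All filter values below are made up for illustration and are not the circuit weights analyzed in the article.

```python
# Toy sketch of the "edges compose into curves" intuition: a second conv
# layer sums oriented edge responses arranged tangent to an arc.
# All filter values are illustrative, not weights from a real model.
import torch
import torch.nn.functional as F

# Three 3x3 oriented edge filters (horizontal, diagonal, vertical).
edges = torch.tensor([
    [[-1., -1., -1.], [ 0.,  0.,  0.], [ 1.,  1.,  1.]],   # horizontal edge
    [[-1., -1.,  0.], [-1.,  0.,  1.], [ 0.,  1.,  1.]],   # diagonal edge
    [[-1.,  0.,  1.], [-1.,  0.,  1.], [-1.,  0.,  1.]],   # vertical edge
]).unsqueeze(1)                                            # shape (3, 1, 3, 3)

def edge_then_curve(image):
    """image: (1, 1, H, W). Returns a crude 'curve' response map."""
    edge_maps = F.relu(F.conv2d(image, edges, padding=1))   # (1, 3, H, W)
    # Second layer: each orientation contributes at an offset along an arc,
    # so the preferred stimulus is a curve rather than a straight edge.
    mix = torch.zeros(1, 3, 5, 5)
    mix[0, 0, 0, 2] = 1.0   # horizontal edges near the top of the arc
    mix[0, 1, 1, 3] = 1.0   # diagonal edges along the shoulder
    mix[0, 2, 2, 4] = 1.0   # vertical edges at the side
    return F.conv2d(edge_maps, mix, padding=2)               # (1, 1, H, W)
```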

Optimal Experiment Design for Causal Effect Identification

Optimal Experiment Design for Causal Effect Identification Sina Akbari, Jalal Etesami, Negar Kiyavash; 26(28):1−56, 2025. Abstract Pearl’s do calculus is a complete axiomatic approach to learn the identifiable causal effects from observational data. When such an effect is not identifiable, it is necessary to perform a collection of often costly interventions in the system to learn the causal effect. In this work, we consider the problem of designing a collection of interventions with the minimum cost to identify the desired effect. First, we prove that this problem is NP-complete and subsequently propose an algorithm that can either find the optimal
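
To see the shape of the optimization problem (and why NP-completeness bites), here is a hedged brute-force sketch: it enumerates subsets of candidate interventions and returns the cheapest one under which the target effect becomes identifiable, with the identifiability check delegated to a placeholder oracle rather than a real implementation of the ID algorithm.

```python
# Brute-force sketch of minimum-cost experiment design for identification.
# `is_identifiable` is a placeholder oracle (e.g. an ID-algorithm check);
# variable names and costs below are made up for illustration.
from itertools import combinations

def min_cost_interventions(candidates, cost, is_identifiable):
    """candidates: iterable of possible interventions (e.g. variable sets).
    cost: dict mapping each candidate to a nonnegative cost.
    is_identifiable: callable taking a tuple of chosen interventions and
    returning True if the target causal effect becomes identifiable.
    Returns the cheapest identifying collection, or None."""
    best, best_cost = None, float("inf")
    items = list(candidates)
    for r in range(len(items) + 1):                  # exponential in general
        for subset in combinations(items, r):
            c = sum(cost[x] for x in subset)
            if c < best_cost and is_identifiable(subset):
                best, best_cost = subset, c
    return best

# Example with made-up interventions and costs:
# min_cost_interventions(["do(X)", "do(Z)"], {"do(X)": 3, "do(Z)": 1},
#                        is_identifiable=lambda s: "do(Z)" in s)
```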

Alternate encoder and dual decoder CNN-Transformer networks for medical image segmentation

Implementation details The implementation of our proposed AD2Former was based on the PyTorch library and Python 3.8. The experiments were conducted on a single NVIDIA RTX 3090 GPU. In our experiments, we resized images to \(224\times 224\). To better initialize our model, we used a pre-trained ResNet34 model. After conducting a thorough comparison, we selected the following hyperparameters for each dataset: For the Synapse multi-organ dataset: we resized the original images to \(224\times 224\) and applied rotation and flipping data augmentation techniques. The model was trained using the SGD optimizer with a batch size of 4
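
A minimal sketch of the training configuration described above (\(224\times 224\) inputs, rotation and flipping augmentation, pre-trained ResNet34 initialization, SGD with batch size 4); the rotation angle, learning rate, momentum, and the model and dataset classes are assumptions not stated in the excerpt.

```python
# Sketch of the reported setup: PyTorch, 224x224 inputs, rotation/flip
# augmentation, pre-trained ResNet34 initialization, SGD with batch size 4.
# Rotation angle, learning rate, momentum, and the model/dataset classes
# are assumptions not given in the excerpt.
from torchvision import models, transforms

train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(degrees=15),    # rotation augmentation (angle assumed)
    transforms.RandomHorizontalFlip(),        # flipping augmentation
    transforms.ToTensor(),
])

# Pre-trained ResNet34 used to initialize the CNN encoder branch.
resnet34 = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)

# Hypothetical wiring (AD2Former and SynapseDataset are not shown in the excerpt):
# model = AD2Former(encoder_init=resnet34)
# loader = torch.utils.data.DataLoader(SynapseDataset(transform=train_transform),
#                                      batch_size=4, shuffle=True)
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # assumed
```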

VINC-S: Closed-form Optionally-supervised Knowledge Elicitation with Paraphrase Invariance

*Equal contribution In Spring 2023, a team at EleutherAI and elsewhere worked on a follow-up to CCS that aimed to improve its robustness, among other goals. We think the empirical side of the project was largely unsuccessful, failing to provide evidence that any method had predictably better generalization properties. In the spirit of transparency, we are sharing our proposed method and some results on the Quirky Models benchmark. Introduction As we rely more and more on large language models (LLMs) to automate cognitive labor, it’s increasingly important that we can trust them to be truthful. Unfortunately, LLMs often reproduce human

Google’s new open model based on Gemini 2.0

For a deeper dive into the technical details behind these capabilities, as well as a comprehensive overview of our approach to responsible development, refer to the Gemma 3 technical report. Rigorous safety protocols to build Gemma 3 responsibly We believe open models require careful risk assessment, and our approach balances innovation with safety – tailoring testing intensity to model capabilities. Gemma 3’s development included extensive data governance, alignment with our safety policies via fine-tuning and robust benchmark evaluations. While thorough testing of more capable models often informs our assessment of less capable ones, Gemma 3’s enhanced STEM performance prompted specific

[2410.14659] Harnessing Causality in Reinforcement Learning With Bagged Decision Times

[Submitted on 18 Oct 2024 (v1), last revised 12 Mar 2025 (this version, v2)] By Daiqi Gao and 3 other authors. Abstract: We consider reinforcement learning (RL) for a class of problems with bagged decision times. A bag contains a finite sequence of consecutive decision times. The transition dynamics are non-Markovian and non-stationary within a bag. All actions within a bag jointly impact a single reward, observed at the end of the bag. For example, in mobile health, multiple activity suggestions in a
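
The bagged-reward structure can be illustrated with a short, hypothetical rollout loop: several consecutive actions are taken inside a bag, and a single reward for the whole bag is observed only when the bag closes. The environment interface, policy signature, and bag length below are invented for illustration and are not taken from the paper.

```python
# Illustrative loop for RL with bagged decision times: actions within a
# bag jointly determine one reward, revealed only at the end of the bag.
# `env`, `policy`, and BAG_LEN are hypothetical placeholders.

BAG_LEN = 5          # decision times per bag (assumed)

def run_bag(env, policy, state):
    """Roll out one bag: collect per-step transitions, receive a single
    bag-level reward at the end, and return both plus the final state."""
    transitions = []
    for t in range(BAG_LEN):
        action = policy(state, t)          # may depend on position in the bag:
        next_state = env.step(action)      # non-Markovian, non-stationary within it
        transitions.append((state, action, next_state))
        state = next_state
    bag_reward = env.end_of_bag_reward()   # observed once per bag
    return transitions, bag_reward, state
```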

[2411.14871] Preference Alignment for Diffusion Model via Explicit Denoised Distribution Estimation

[Submitted on 22 Nov 2024 (v1), last revised 13 Mar 2025 (this version, v3)] By Dingyuan Shi and 3 other authors. Abstract: Diffusion models have shown remarkable success in text-to-image generation, making preference alignment for these models increasingly important. The preference labels are typically available only at the terminal of denoising trajectories, which poses challenges in optimizing the intermediate denoising steps. In this paper, we propose to conduct Denoised Distribution Estimation (DDE) that explicitly connects intermediate steps to the terminal
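
The excerpt only says that DDE "explicitly connects intermediate steps to the terminal", so the sketch below shows the standard way an intermediate noisy sample is tied to a terminal (clean) estimate in a DDPM-style model: the one-step denoised prediction computed from the predicted noise. This is the textbook estimate, not the paper's DDE estimator, and the noise-prediction network is a placeholder.

```python
# Standard one-step denoised estimate linking an intermediate diffusion
# state x_t to a terminal (clean) sample x_0; the usual DDPM identity,
# shown only to illustrate "connecting intermediate steps to the terminal".
# This is NOT the paper's DDE method. `eps_model` is a placeholder.
import torch

def denoised_estimate(x_t, t, eps_model, alpha_bar):
    """x_t: noisy sample at step t; alpha_bar: cumulative noise schedule
    (1-D tensor); eps_model(x_t, t) predicts the added noise epsilon.
    Returns x0_hat = (x_t - sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_bar_t)."""
    eps = eps_model(x_t, t)
    a_t = alpha_bar[t]
    return (x_t - torch.sqrt(1.0 - a_t) * eps) / torch.sqrt(a_t)

# A preference loss defined on x0_hat (rather than on the terminal sample
# alone) is one way intermediate steps could receive a training signal.
```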